FormFlow Requests Not Pronounced in Cortana Skills

I am building a Cortana skill by first building a bot using FormFlow. I discover my intents and entities using LUIS and pass the resulting objects to my FormFlow dialog. If one or more FormFlow fields are not filled in, the FormFlow dialog prompts the user for the missing information, but the prompt is only displayed on the Cortana canvas, not spoken. Is there a way to make FormFlow speak its prompts?

For example, the prompt "Do you need an airport transfer?" was only displayed on screen and never spoken.

My FormFlow form definition looks like this:

[Serializable]
public class HotelsQuery
{
    [Prompt("Please enter your {&}")]
    [Optional]
    public string Destination { get; set; }

    [Prompt("Near which airport?")]
    [Optional]
    public string AirportCode { get; set; }

    [Prompt("Do you need an airport shuttle?")]
    public string DoYouNeedAirportShuttle { get; set; }
}
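For context, here is a minimal sketch of how such a form is typically launched in Bot Builder v3. The `HotelsDialog` wrapper class and the welcome message are illustrative, not from the original question; the `FormDialog.FromForm` / `FormBuilder` calls are the standard v3 FormFlow API:

```csharp
using System;
using Microsoft.Bot.Builder.Dialogs;
using Microsoft.Bot.Builder.FormFlow;

[Serializable]
public class HotelsDialog // illustrative name, not from the question
{
    public static IForm<HotelsQuery> BuildForm()
    {
        // FormBuilder generates the conversation flow from the
        // HotelsQuery fields and their [Prompt] attributes above.
        return new FormBuilder<HotelsQuery>()
            .Message("Welcome to the hotels bot!")
            .Build();
    }

    // Typically passed to Conversation.SendAsync(activity, MakeRootDialog)
    // from the MessagesController.
    public static IDialog<HotelsQuery> MakeRootDialog()
    {
        return FormDialog.FromForm(BuildForm, FormOptions.PromptInStart);
    }
}
```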

      



2 answers


I don't think Speak is currently supported in FormFlow.

What you can do as a workaround is add an IMessageActivityMapper that automatically copies the message text into the Speak property:

namespace Code
{
    using Microsoft.Bot.Builder.Dialogs;
    using Microsoft.Bot.Builder.Dialogs.Internals;
    using Microsoft.Bot.Connector;

    /// <summary>
    /// Activity mapper that automatically populates activity.speak for speech enabled channels.
    /// </summary>
    public sealed class TextToSpeakActivityMapper : IMessageActivityMapper
    {
        public IMessageActivity Map(IMessageActivity message)
        {
            // only set the speak if it is not set by the developer.
            var channelCapability = new ChannelCapability(Address.FromActivity(message));

            if (channelCapability.SupportsSpeak() && string.IsNullOrEmpty(message.Speak))
            {
                message.Speak = message.Text;
            }

            return message;
        }
    }
}

      



Then, to use it, register it in Global.asax.cs:

var builder = new ContainerBuilder();

builder
    .RegisterType<TextToSpeakActivityMapper>()
    .AsImplementedInterfaces()
    .SingleInstance();

builder.Update(Conversation.Container);

      



Ezequiel Jadib's answer helped me work out what I needed for my use case. I added a few extra lines to set the InputHint field to ExpectingInput when the text is a question. With this change, Cortana automatically listens for my response and I don't need to activate the microphone myself.



public IMessageActivity Map(IMessageActivity message)
{
    // only set the speak if it is not set by the developer.
    var channelCapability = new ChannelCapability(Address.FromActivity(message));

    if (channelCapability.SupportsSpeak() && string.IsNullOrEmpty(message.Speak))
    {
        message.Speak = message.Text;

        // set InputHint to ExpectingInput if text is a question
        var isQuestion = message.Text?.EndsWith("?");
        if (isQuestion.GetValueOrDefault())
        {
            message.InputHint = InputHints.ExpectingInput;
        }
    }

    return message;
}

      
