Topics

Product approach and responses


 

As it stands today, but also as it will be in the future, there are some use cases where FlickType is unlikely to be a meaningfully better option than the default keyboard. Fundamentally, these are the cases where you need to enter very little text. Good examples are typing or dictating a single word to reply to a message, or even typing just one character to bring up a list of recipients when composing an email. Roughly speaking, in these cases FlickType will need to require no extra effort, unlike today, where you must dismiss it in order to interact with the app. Even a single gesture is too much, and will be a constant reminder of friction.

Developing that parity is quite straightforward and a somewhat known quantity, but it will still take some time, time that comes at the expense of not doing other things. It would be overly optimistic to assume that we can complete every single feature that everyone has already mentioned, and do so quickly, so it's always important to make sure we are working on the most impactful things at any one time.

As such, I believe it's best to first concentrate on the use cases where FlickType really shines. That's when you write your longer thoughts in an extended email, or when you put down a lot of information in a note to reference later, or when you need to make frequent edits to a long document. I think prioritizing this way is what can help FlickType become more successful within the blind and low-vision community and eventually mature to the level described earlier. So this is primarily what will drive the decisions about what to develop next and what to set aside until the core typing experience feels solid enough. That said, just about everything discussed so far is something we plan to eventually address one way or another, and a lot of it might be coming out sooner than you'd expect.

Responding to some of your earlier questions and points:

As you have mentioned, there are many possible gestures available to map to the common functions required when typing. The flick-and-hold gestures are not available as standard iOS gestures, so I will have to write some custom gesture code soon. That code will also address the current delay in tap feedback, as well as the lack of new-line entry, which will probably be flick right and hold. After that, if desired, it will be relatively trivial to create a few different gesture mappings, or even make them fully customizable, hear what you feel works best, and use that as the default. Note that some gestures cannot be used while typing: for example, double tap and hold would be impossible to distinguish from a regular tap quickly followed by a single tap and hold.
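To give a sense of what that custom gesture code might involve, here is a minimal sketch of a "flick right and hold" recognizer. This is purely illustrative, not FlickType's actual implementation: the class name, the distance and hold thresholds, and the horizontal-only motion check are all assumptions made for the example.

```swift
import UIKit
import UIKit.UIGestureRecognizerSubclass  // required to set `state` in a subclass

// Hypothetical recognizer: the touch must travel mostly horizontally past a
// distance threshold, then stay down beyond a hold duration.
final class FlickAndHoldGestureRecognizer: UIGestureRecognizer {
    var minimumFlickDistance: CGFloat = 40  // points the finger must travel right
    var holdDuration: TimeInterval = 0.5    // how long to keep holding after the flick

    private var startPoint: CGPoint = .zero
    private var holdTimer: Timer?

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent) {
        guard touches.count == 1, let touch = touches.first else {
            state = .failed
            return
        }
        startPoint = touch.location(in: view)
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent) {
        guard let touch = touches.first else { return }
        let p = touch.location(in: view)
        let dx = p.x - startPoint.x
        let dy = p.y - startPoint.y
        // Once the finger has flicked far enough rightward, arm the hold timer.
        if dx > minimumFlickDistance, abs(dy) < dx, holdTimer == nil {
            holdTimer = Timer.scheduledTimer(withTimeInterval: holdDuration,
                                             repeats: false) { [weak self] _ in
                self?.state = .recognized  // flick right + hold completed
            }
        }
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent) {
        // Lifting before the timer fires means it was a plain flick, not a hold.
        holdTimer?.invalidate()
        holdTimer = nil
        if state != .recognized { state = .failed }
    }

    override func reset() {
        holdTimer?.invalidate()
        holdTimer = nil
        super.reset()
    }
}
```

The same timer-based structure also shows why double tap and hold is ambiguous: until the hold timer fires, the recognizer cannot know whether it is seeing the start of a hold or just another quick tap.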

Making flick actions location-dependent is, I think, a steep requirement for new users. That might be why they don't exist in VoiceOver. Please correct me if I'm wrong.

Being able to manually type words such as names, as well as using and adding to your custom dictionary, is definitely required and coming soon. Emoji are most likely part of that too. The only reason tap and hold currently drops you out to the iOS keyboard is literally that there's no manual entry yet, smile.

Locked landscape mode is definitely useful and a planned feature.

Ed, you mentioned: "One finger gestures seem to be making the interface unusually complicated to use." Are you referring to the current interface, or the interface as it would be if more functions were added using single-finger gestures? If it's the current interface, could you please elaborate on which finger gesture you'd be OK to lose?

Finally, if there's any question I missed, please do let me know. I'd be happy to hear it again and address it.

Thank you all for your support, more to come soon! Smile.

- Kosta


George Cham
 

I've noticed that with this latest build, if you type a word and then swipe left to delete it, the word remains; if you swipe again, it's deleted.

Kind Regards,

George Cham


________________________________
From: alpha@flicktype.groups.io <alpha@flicktype.groups.io> on behalf of FlickType <@FlickType>
Sent: Wednesday, May 2, 2018 2:34:16 PM
To: alpha@flicktype.groups.io
Subject: [FlickType-alpha] Product approach and responses



Ed Worrell
 

Hey Kosta,

What I meant by that is that the multi-finger gestures, like the two-finger swipe down to insert a new line, do not interfere with standard typing gestures. I could see a user with other disabilities accidentally activating some of the one-finger gestures without meaning to. Like you said in the last email, for a new line you might try a one-finger swipe right and hold; this might be difficult for some users, who could simply be trying to proceed to the next word.

If you can, I think the controls should mimic the standalone FlickType application; those gestures work great. To add manual typing back into the mix, you could perform a two-finger tap and hold to switch back to the system keyboard. This would allow you to use one finger to hunt for the keys you are looking for.

Just my thoughts, thanks again and great work.

Ed Worrell

On May 1, 2018, at 10:34 PM, FlickType <@FlickType> wrote:
