In this post, I will outline our thought process behind the development of a voice search feature in a recent React Native project.

If you want to cut straight to the example repos, see the Expo React Native app with voice search example and the audio-to-text Google Cloud function example. I will also walk through how the final feature works with some annotated code examples.

## To Expo or not to Expo

We've developed React Native apps with Expo and without. Expo is a valuable toolset that removes frustrating layers from the development process and provides easy bridging to device system features. Expo also simplifies a time-consuming and tedious build process. Instead of spending time combing through Xcode and Android Studio, we can spend time on user-facing features. Once you use Expo, it is hard to go back to debugging things that should be simple (like loading fonts) and Googling esoteric errors (although that can't be totally avoided).

When you board the Expo train, though, you need to be all in. The Expo SDK provides tons of access to system functionality such as the camera, calendar, and accelerometer, but if you need functionality that lands outside the features in the SDK, you'll need to eject and rebuild those features with native code or by using an existing package that does this for you (think "link"). Halfway through the project, this can be a daunting undertaking, and all that retesting will likely blow your budget.

When we started a recent React Native project, we weighed using Expo or not. Most of the project requirements we could accomplish within Expo, but one gave us pause: voice search. If we ejected the app, we could probably use react-native-voice. (This is why we are excited about Unimodules and the possibility of using parts of the Expo API.)
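For context on what ejecting would have bought us: react-native-voice exposes an event-driven API (`Voice.start`, `Voice.stop`, and callbacks like `onSpeechResults`). The sketch below shows only that wiring pattern, with the native module replaced by a hand-written stub, since the real library requires native code that only works in a bare (ejected) React Native project, not in Expo's managed workflow.

```javascript
// Sketch of the react-native-voice wiring pattern. "Voice" here is a stub
// standing in for the real native module, so the shape is visible without
// a device; the real library dispatches results from native speech APIs.
const Voice = {
  _handlers: {},
  set onSpeechResults(fn) { this._handlers.results = fn; },
  start(locale) {
    // The real module would start native speech recognition for this locale.
    this._locale = locale;
  },
  stop() {
    // Simulate the recognizer delivering a final result on stop.
    if (this._handlers.results) this._handlers.results({ value: ['hello world'] });
  },
};

let transcript = '';
// Results arrive via callback, not a return value, just as in the real API.
Voice.onSpeechResults = (e) => { transcript = e.value[0]; };
Voice.start('en-US');
Voice.stop();
console.log(transcript); // 'hello world'
```

In a real component you would register the callbacks on mount and call `Voice.destroy()` on unmount; the point here is just that recognition results are pushed to listeners rather than returned from `start`.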