Whether your application detects cars, people, buildings, or anything else, the possibilities for object detection are endless! The Call for Code 2019 challenge presents a great opportunity for developers to apply machine learning and AI technologies like object detection in unique ways to situations where lives are at stake, like natural disasters. The ability to lean on pre-trained and open source models is a great way to get started quickly, but specific applications often require custom models. Explore the entire development process with us, from allocating the necessary IBM Cloud and Watson services, to labeling a dataset, training a model, and finally deploying the model to an iOS app using Core ML.
What is object detection? Unlike image recognition, where we train a model to classify specific objects within an image, object detection also allows us to localize objects in an image and provide their precise locations. In the upcoming IBM Call for Code Workshop Day during WWDC (June 5, 2019), we'll enable developers to explore data science and create their own object detection models with ease by leveraging the power of Watson Machine Learning and Apple's Core ML framework. Before the event, developers can get their hands on the code through the Create a real-time object detection app using Watson Machine Learning code pattern.
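To make the iOS side concrete, here is a minimal sketch of how an app might run a Core ML object detection model through Apple's Vision framework. The class name `MyDetector` is a placeholder; a model exported from the code pattern generates its own Swift class when added to an Xcode project.

```swift
import CoreML
import Vision

// Minimal sketch: run an object detection model that has been
// converted to Core ML, using the Vision framework.
// "MyDetector" is a placeholder for the class Xcode generates
// from your compiled .mlmodel file.
func detectObjects(in image: CGImage) throws {
    let coreMLModel = try MyDetector(configuration: MLModelConfiguration()).model
    let visionModel = try VNCoreMLModel(for: coreMLModel)

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        // Detection models yield VNRecognizedObjectObservation,
        // which carries both class labels and a bounding box --
        // the "localization" that distinguishes detection from
        // plain image classification.
        guard let results = request.results as? [VNRecognizedObjectObservation] else { return }
        for observation in results {
            let label = observation.labels.first?.identifier ?? "unknown"
            // boundingBox uses normalized coordinates (0...1,
            // origin at the lower-left corner of the image).
            print("\(label) (\(observation.confidence)) at \(observation.boundingBox)")
        }
    }

    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])
}
```

In a real-time app, you would call a handler like this from the camera's capture callback rather than on a single image, but the request and observation types are the same.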
Watson Machine Learning is a service that allows developers to streamline their workflow when developing custom models – in one environment you can build, test, deploy, and optimize models quickly and easily. Within Watson Machine Learning, you can also leverage pre-trained models and open data sets to simplify much of the work and allow you to focus on your core functionality.
When considering ways in which you and your colleagues can engage around Call for Code, consider the power of custom object detection using Watson Machine Learning and Core ML in your iOS apps. Get started now at developer.ibm.com/callforcode. How will you answer the call?