Top five serverless questions

November 3, 2019

With all the buzz surrounding projects like Knative and OpenWhisk, we get a lot of questions about serverless. So, we put together some of the most frequently asked questions with corresponding answers to share with you.

1. What’s the relationship between OpenWhisk and Knative?

You may have noticed a lot of excitement in the Kubernetes and serverless spaces around Knative, a new serverless platform built on top of Kubernetes for deploying and managing modern serverless workloads. It’s a relatively new open source project with contributors from several different companies with deep experience in this space, including IBM, Red Hat, Google, and Pivotal. You also may already know about Apache OpenWhisk, an open source serverless platform. IBM is heavily involved in both projects, and we often get asked about the relationship between the two.

OpenWhisk is an open source serverless platform that can execute functions in response to events. To easily use OpenWhisk without setting up and managing your own platform, you can use IBM Cloud Functions. This managed service offers several different features:

  • Out-of-the-box support for many languages, including Node.js, Scala, PHP, Go, and Swift.
  • Easy integration with various services, including Kafka, cloud object storage, IBM Cloudant, and Slack.
  • Optimal utilization of your actions, so you do not pay for idle resources.
  • Decreased time to market, since you do not need to manage your virtual machines or infrastructure resources. You just write code and give it to IBM Cloud Functions to run.
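To make that model concrete, here is a minimal sketch of what a Node.js action for IBM Cloud Functions looks like: the platform invokes main() with the request parameters and serializes the returned object as JSON. The greeting logic is just an illustration.

```javascript
// Minimal Node.js action sketch: the platform calls main() with the
// invocation parameters and returns the resulting object as JSON.
function main(params) {
  // Fall back to a default when no name parameter is provided
  const name = params.name || 'stranger';
  return { greeting: `Hello, ${name}!` };
}
```

You deploy this with ibmcloud fn action create and the platform handles everything else: scaling, routing, and tearing the container down when it is idle.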

Knative is installed on top of a Kubernetes cluster and extends the capabilities of Kubernetes to provide a serverless experience to users of that cluster. Compared to a platform solution like IBM Cloud Functions, Knative requires more configuration, maintenance, and set up, but that also comes with more control. If you’re already managing a Kubernetes cluster for your applications, adding Knative to the stack is straightforward. For example, on IBM Cloud, you can use the managed Knative add-on to easily install Knative to your cluster and get started.

Both OpenWhisk and Knative are open source projects that deliver the benefits of serverless, from scalability to better utilization of resources to decreased time to market. If you need less control or would just like to use a serverless product out of the box, a solution such as IBM Cloud Functions may be the right solution for you. If you’re already using Kubernetes, or you desire a little more control over your stack, installing Knative may be the right choice.

2. How does one function call another function with IBM Cloud Functions?

The quick answer to chaining your functions together is to use a sequence. A sequence is a special type of action that chains multiple actions together: the result of each action is passed as an argument to the next action in the sequence. To create a sequence that chains together action_1 and action_2, you can use the command line interface (CLI) like this:

ibmcloud fn action create <sequence_name> --sequence <action_1>,<action_2>

Now, when the sequence is fired, it will fire action_1, and then pass the output of action_1 as the input to action_2.
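As a sketch of how data flows through a sequence, the two hypothetical actions below mirror that behavior as plain functions: the JSON result of the first becomes the input of the second. The normalization and word-count logic is purely illustrative.

```javascript
// Hypothetical first action: normalize the incoming text
function action1(params) {
  return { text: (params.text || '').trim().toLowerCase() };
}

// Hypothetical second action: count the words in the normalized text
function action2(params) {
  return { words: params.text ? params.text.split(/\s+/).length : 0 };
}

// Conceptually, this is what the platform does when the sequence fires:
// each action's output is fed to the next action as input.
function runSequence(params) {
  return action2(action1(params));
}
```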

However, your scenario may be a little more complex. For example, you may want action_1 to call action_2 multiple times (once for each item in an array), or you may need action_1 to call three other actions in parallel. To call one function directly from another, you can use the OpenWhisk SDK, which is included in the JavaScript runtime. Once you have required the SDK, you can invoke the desired action by providing its name and parameters. You can see an example of how this might work in this code pattern for processing images.

const openwhisk = require('openwhisk');

async function main(params) {
  const ow = openwhisk();
  // Invoke both actions in parallel and wait for both to complete
  await Promise.all([
    ow.actions.invoke({
      actionName: "/<namespace>/<package_name>/<action_name_1>",
      params: { bucket: params.bucket, url: params.body, key: params.key }
    }),
    ow.actions.invoke({
      actionName: "/<namespace>/<package_name>/<action_name_2>",
      params: { bucket: params.bucket, url: params.body, key: params.key }
    })
  ]);
}

Each of these two actions will run in parallel. The OpenWhisk SDK provides several other options for interacting with actions and triggers from your code; you can learn more about the SDK on the npm module page.

You can also chain your actions by accessing the provided HTTP endpoint, which is generated for each function created in IBM Cloud Functions:

curl -u API-KEY -X POST https://us-east.functions.cloud.ibm.com/api/v1/namespaces/beemarie/actions/my_action?blocking=true

By default, functions can be triggered with a POST request, but they can also be enabled as web actions, which can be invoked through any of the HTTP methods: POST, GET, PUT, PATCH, DELETE, HEAD, or OPTIONS. Web actions can be invoked from any web-based app and are associated with the user who created the action rather than the caller. Once you have turned your functions into web actions, you can assemble them into a full-featured API by using an API gateway to expose your APIs with security, OAuth protocol support, rate limiting, and custom domain support.
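As a sketch of the web action programming model, a web action can return an HTTP-style response object with statusCode, headers, and body fields instead of a plain JSON result. The greeting and validation logic below is illustrative.

```javascript
// Sketch of a web action: it returns an HTTP-style response object
// (statusCode, headers, body) rather than a bare JSON result.
function main(params) {
  if (!params.name) {
    // Reject requests that are missing the expected parameter
    return { statusCode: 400, body: { error: 'name parameter is required' } };
  }
  return {
    statusCode: 200,
    headers: { 'Content-Type': 'application/json' },
    body: { greeting: `Hello, ${params.name}!` }
  };
}
```

You can enable an existing action as a web action with ibmcloud fn action update <action_name> --web true.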

3. How do I start migrating my applications to serverless workloads?

This is a question we hear all the time! Serverless architectures can provide significant benefits to your organization, so we understand why you may be excited to begin. Here’s what we recommend:

  • Start by understanding the best use cases for serverless. Several workloads lend themselves well to a serverless architecture, such as backend APIs, mobile backends, data processing, scheduled tasks, and event-driven applications, to name a few.
  • Don’t try to boil the ocean. Start with a single microservice that makes sense to be run on a serverless platform and aligns with the benefits and goals you hope to achieve. For example, you may be looking for automatic scalability or cost savings based on the usage patterns of your potential action.
  • If your application or microservice can be easily containerized, then you can easily try it out on IBM Cloud Functions or Knative. IBM Cloud Functions provides a Docker runtime, which will simply run the Docker image you specify. Knative’s deployment unit is a container image, so you can easily try your containerized application there as well.
  • Reach out to the (open source) community. There are great communities of people building and shaping OpenWhisk and Knative on a daily basis. As a user, you’re in the perfect position to open issues and interact to make the projects better for everyone. You can join the OpenWhisk community channel on Slack at openwhisk.apache.org/slack.html and find the Knative channel at knative.slack.com.

4. Where do I store my files and install my database?

When your code runs on a serverless platform, it is generally extracted and run in a container environment. That container environment technically has a filesystem your code can interact with, but it isn’t a permanent space: it only lives as long as the container does. When your serverless action is no longer needed, the container running it is removed.

When building serverless apps, one typical pattern is to use object storage or a database as a managed service from your cloud provider. With object storage, you upload your files on demand and don’t need to worry about the storage infrastructure underneath. This approach fits in nicely with the serverless model. Using a cloud provider service moves you away from the infrastructure and maintenance of database and storage management.

Some of the object storage or database services available may provide event support, enabling an event-driven architecture in your serverless applications. For example, IBM Cloud Functions can listen for new items added to your cloud object storage bucket and then fire a trigger as a result.
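As an illustrative sketch (the bucket and key parameter names are assumptions for illustration, not the exact event schema), an action wired to an object storage trigger might look like this:

```javascript
// Sketch of an action fired by an object storage event. The exact
// payload depends on the provider; bucket and key are assumed names.
function main(params) {
  const { bucket, key } = params;
  if (!bucket || !key) {
    return { error: 'expected bucket and key in the event payload' };
  }
  // React to the new object, e.g. kick off processing for it
  return { message: `processing ${key} from bucket ${bucket}` };
}
```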

In general, the serverless approach is to use managed services wherever possible. This enables you to focus on your team’s core competencies instead, such as creating awesome business logic for users!

5. How can I debug serverless functions?

Debugging serverless applications in production can be difficult because you don’t have access to the runtime environment and infrastructure running your code. That said, there are some useful tools available today, and likely more on the way as the space continues to mature:

  • Logs

    Logs written to stdout and stderr are sent to the platform, and typically there’s a logging service integration that collects these logs for you. For example, IBM Cloud Functions supports the IBM Log Analysis with LogDNA service, which collects these logs and enables you to use them to help debug issues.

  • Metrics

    You may be interested in information like invocation status, invocation errors, start and end times for your actions, and cold start times. These kinds of high-level metrics are generally built into your serverless platform. Most cloud providers send this metric data to a monitoring service, where you can set up dashboards to monitor your actions. IBM Cloud Functions sends metrics to IBM Cloud Monitoring, where they can be viewed using Grafana. Grafana enables you to configure dashboards, create alerts based on metric values, and more.
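One lightweight pattern that pairs well with these logging and monitoring tools is structured logging: writing one JSON object per line to stdout so the log service can filter on individual fields. A minimal sketch (the field names are our own choice, not a platform requirement):

```javascript
// Emit one JSON object per log line so a log service can filter
// on fields like level or action instead of parsing free text.
function logLine(level, message, fields) {
  const entry = Object.assign(
    { level, message, timestamp: new Date().toISOString() },
    fields
  );
  console.log(JSON.stringify(entry));
  return entry;
}
```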

You may also want to debug during your development process. Recently, the wskdebug tool was introduced to the Apache OpenWhisk community. It supports full debugging of actions, automatic code reloading, auto-invoking of actions on code changes, and more. Right now, it supports Node.js actions out of the box, but other languages can be configured using the command line.

Conclusion

We hope this inspired you to start exploring serverless or continue your serverless journey! To get started with IBM Cloud Functions, register for a free IBM Cloud Lite account. If you’d like to explore Knative a little further, you can create a Kubernetes cluster on IBM Cloud, and then enable the Knative add-on for your cluster.

Belinda Vennam
James Thomas
