Frequently Asked Questions
How does a serverless architecture differ from a traditional IT architecture -- whether on-premises or already cloud-based?
Serverless is an architecture and set of technologies that takes on-demand computing to the next level: a request triggers the deployment of the very function that handles it. Serverless is a misnomer, since you still need a listening component (a server), but instead of a complete server waiting for the request, only an API gateway is required, and the API gateway instantiates the function or microservice needed to process the request.
If limited to that approach, serverless is just an evolution of IT architecture. However, by making the deployment of a function or microservice dynamic, serverless architecture also introduces the notion of fluid software since it is possible to decide where and when the function or microservice will be deployed. Therefore, based on conditions (derived from analytics), it will be possible to deploy the function or microservice closer to the request generator, which could be an edge node.
In this case, serverless architecture is a fundamental transformation since it breaks away from client-server architecture. The shift from legacy architecture will include the following considerations:
- Solutions have to be microservice based.
- There may not be a central component, or the central component may be limited to a discovery service.
- A microservice may run on the same device on which the requesting application is running.
- Microservices are inherently single-tenant and potentially single-user.
What kinds of services and solutions should managers and professionals turn to in order to build and support their serverless architecture?
It is important to understand extreme decomposition, since serverless implies microservices, which in turn means understanding clusters and cluster management. Because of the fluidity of the solution, it is also important to understand extreme distribution, including edge-cloud, which modifies the criteria and scope of cloud-based cluster management (for clustering based on proximity or on a user's account). Technologies like Kubernetes for cluster management, and sidecar patterns such as Istio or mimik edgeSDK, are therefore important to understand. Automated deployment also matters: non-human-driven deployment and SCM will be mandatory for the success of a serverless/microservice architecture.
How do security protocols and processes differ in a serverless environment?
The security protocols themselves do not change. However, since serverless-microservice-based solutions are distributed, it is important not to depend on a central trust authority and to use peer-to-peer token validation for API requests. It is also important to treat the network as untrustworthy and not to assume that the system's components will sit behind a firewall. Finally, multiple levels of security have to be handled, since sensitive payloads may pass through relay microservices. For example, user information may travel through a tunnel microservice: the call to the tunnel is protected by a token, but the user information itself must also be protected so that the tunnel cannot interpret it.
How does the storage component of serverless stack up to previous architectures? Are there additional considerations required for serverless?
In serverless-microservice-based architectures, each instance has to be stateless. The storage components are therefore essential for storing state, as opposed to some legacy systems where state is maintained by non-storage components. Because of the distributed nature of serverless-microservice-based systems, and due to theoretical limitations (the CAP theorem), the storage will most likely be BASE rather than the ACID storage of legacy systems. Clever partitioning strategies, such as explicit addressing and geocentric storage, have to be put in place in order to cope with the eventual consistency of the system.
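A minimal sketch of explicit addressing with geocentric partitioning, under illustrative assumptions (the region prefixes and host names are invented): the partition is encoded in the key itself, so any node can compute a record's home without a central lookup, and each region's data lives in the storage cluster closest to it.

```javascript
// Hypothetical partition map: each region prefix routes to the storage
// cluster closest to where the data is produced and consumed.
const partitions = {
  eu: 'storage-eu.example.internal',
  us: 'storage-us.example.internal',
  ap: 'storage-ap.example.internal',
};

// Explicit addressing: the key carries its own partition ("eu:user-1234"),
// so ownership is computable locally and deterministically.
function partitionFor(key) {
  const region = key.split(':')[0];
  return partitions[region] || partitions.us; // fall back to a default home
}

partitionFor('eu:user-1234'); // → 'storage-eu.example.internal'
```

Because each key has exactly one home partition, eventual consistency between regions only has to be tolerated for cross-region reads, not for a record's own writes.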
Is cloud-based compute power a concern? How can the need for back-end power be addressed in a serverless setting?
In serverless-microservice systems, the computing demand is mediated by the application itself, resulting in a closer fit between the allocated computing power and the used computing power. Due to the dynamic and fluid nature of the systems, it is also possible to offload the required computing power to other computing nodes (like edge devices or gateways) and thereby further optimize the allocation of cloud-like computing power.
edgeSDK Device Connectivity
Is there a need for at least one device to be connected to the internet to create the mesh network? Or will it work even if no devices are connected to the internet?
When the internet is available:
- The user installs the app using the platform-appropriate app store (e.g. Google Play Store, iOS App Store).
- The user registers with the app (meaning that edgeSDK will register the node ID under a specific user's account ID).
- edgeSDK receives a valid token from our back-end services. The token's expiration time depends on the scope of service and can vary from 24 hours to several days or months.
From this point on, edgeSDK doesn’t need the internet to be available:
- edgeSDK uses the valid token to provide all functionality.
- Devices on the same Wi-Fi network can discover each other using edgeSDK.
- mimik edgeSDK container manager can instantiate any number of required microservices and use edgeSDK services.
- Microservices can communicate among each other, exchanging data.
- Once the token expires (see the third step in the list above), edgeSDK requires the internet to be available to fetch a new valid token.
If all devices in a cluster are already registered, and they are on the same Wi-Fi network, edgeSDK does not require internet access to function.
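The offline/online decision above can be sketched in a few lines. This is illustrative only: the field name `expiresAt` is an assumption, not the actual edgeSDK token schema.

```javascript
// While the token is valid, all edgeSDK functionality works offline;
// once it expires, internet access is needed to fetch a new token.
function needsInternet(token, now = Date.now()) {
  return now >= token.expiresAt;
}

// Hypothetical token with a 24-hour lifetime.
const token = { nodeId: 'node-1', expiresAt: Date.now() + 24 * 3600 * 1000 };

needsInternet(token);                      // false: still valid, offline is fine
needsInternet(token, token.expiresAt + 1); // true: must refresh over the internet
```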
Does edgeSDK have a dependency on a SaaS in the cloud? What are the deployment options for edgeSDK's back end?
Our product is deployed on AWS in a multi-region configuration and uses AWS load balancing and auto-scaling features. All edgeSDK back-end components are NodeJS applications deployed via Ansible, which lets us minimize the effort required to deploy on AWS. The same Ansible scripts can also be used for on-premises deployment with some modifications.
In the current edgeSDK, do you have an implementation enabling P2P communication using the TCP/UDP hole-punching technique or something similar (e.g. as used by BitTorrent or VoIP)?
We are not using UDP or TCP hole punching as the primary P2P communication mechanism, due to inconsistent NAT traversal behavior.
We use UDP multicast for local supernode discovery. For bootstrap registration and other communication, we use HTTPS; for tunneling to BEPs, we use Secure WebSocket (WSS) for inbound communication (BEP to node) and HTTPS for outbound communication (node to BEP). In the future, we may consider UDP/TCP hole punching as a secondary mechanism.
What is the security architecture for edge?
Edge contains three levels of security:
- Communication encryption (at edgeSDK level communication)
When a node communicates with a supernode, the entire exchange is encrypted using the AES 128 GCM encryption algorithm.
- Payload encryption (at edgeSDK level communication)
In the account cluster use case, the payload is encrypted using the AES 128 GCM encryption algorithm.
- Edge Access Token Authorization
Registered apps must use an edge access token to make an API call to edgeSDK.
Please note: any level of security beyond the aforementioned levels needs to be managed by the app developers, including:
- App to edge microservice communication security.
- Edge microservice to edge microservice (link-local) communication security.
Why can't HTTPS be used for edge level security?
It can't be used for a number of reasons, including:
- HTTPS requires a signed certificate.
- A signed certificate requires a valid and registered domain name.
- Saving "certificate private key" on every single link-local node in a secure way is near impossible.
You can encrypt application payload by using any available off-the-shelf security algorithm (e.g. AES 128 GCM).
Since the cloud/fog will be operated by us, and the client won't be just any client but specifically our client, it is possible, for example with wget + apache/nginx + a custom certificate, to have an HTTPS connection, provided wget is instructed to trust the self-signed certificate coming from the server. Will the same be possible with edgeSDK?
Will it be possible to set TCP_NODELAY in edgeSDK optionally?
Yes, but keep in mind that disabling the TCP delay (Nagle's algorithm) risks increasing network congestion.
edgeSDK Network Configuration
Our benchmark stopped at 9500 because after more than 20 seconds on our machine, there was a timeout in the edgeSDK with a 500 status code. Would it be possible to configure this timeout to other values?
Yes, you can, but we highly recommend that you don't. The 20-second timeout has been deliberately designed as part of our edge-container quota management policy, which prevents a microservice from monopolizing the edge node's entire CPU time.
The evolution of serverless architecture will make the discovery service a key part of the system, since one of the main issues will no longer be whether a service is running, but rather where the service is or will be.
Another issue is the maintainability and optimization of the system, since when a service is down or non-existent, it may mean that:
- The service could not start.
- The service went down because of a bug.
- The script for deploying the service is faulty.
- The data that is used to trigger the deployment of the service is wrong.
- The inference engine that makes the decision to trigger the deployment is not trained properly.
- It is ok for the service not to run.
Maintaining and debugging serverless-microservice-based systems will have to be based on logs (it is impossible to set a breakpoint in a service that is not yet deployed) and on deep analysis of those logs to identify anomalous patterns. Finally, optimization will be key.
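Log-driven diagnosis can be sketched very simply. The log format and the idea of counting failure lines per service are illustrative assumptions, not a prescription:

```javascript
// Hypothetical deployment/runtime log lines; with no running process to
// attach a debugger to, failures must be inferred from records like these.
const logLines = [
  '2021-03-01T10:00:01Z deploy geo-service ok',
  '2021-03-01T10:00:05Z start geo-service error: port in use',
  '2021-03-01T10:00:09Z start geo-service error: port in use',
];

// Count error lines per service: a service that repeatedly fails the same
// phase (here "start") is a candidate for one of the causes listed above.
function failureCounts(lines) {
  const counts = {};
  for (const line of lines) {
    const m = line.match(/^\S+ (\w+) (\S+) error/);
    if (m) {
      const service = m[2];
      counts[service] = (counts[service] || 0) + 1;
    }
  }
  return counts;
}

failureCounts(logLines); // { 'geo-service': 2 }
```

Real systems would feed such counts into anomaly detection over time windows rather than a single pass, but the principle is the same: the logs are the only observable surface.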