Questions and Answers
Question 2: How do we use the edgeSDK API? From what I’ve seen, it’s only JS modules that use prepared functions on responses, etc.
Answer: Once you have read the above-mentioned documents and still have uncertainty about how to use JS to develop microservices, please let us know. We also have an open-source project that might help you toward this purpose: https://github.com/mimikgit/mBeam
Question 3: Is there a need for at least one device to be connected to the internet to create the mesh network, or will it work even if no devices are connected to the internet?
Answer: When the internet is available:
(1) The user installs the app from an app store (on any platform, e.g. iOS, Android).
(2) The user registers the app under his/her name (meaning edgeSDK registers the nodeId under a specific accountId).
(3) edgeSDK receives a valid token from our back-end services (the token has a scope, and the scope defines the validity period, which can vary, e.g. 24 hours or a couple of days).
From this point on edgeSDK doesn’t need the internet to be available:
(4) edgeSDK uses the valid token to provide all functionality.
(5) Two (or more) devices on the same Wi-Fi network can discover each other using edgeSDK.
(6) The mimik container manager can spin up any number of required microservices that use edgeSDK services.
(7) Microservices can communicate with each other, exchanging data in the way we illustrated in the earlier diagrams.
(8) Once the token becomes invalid (its validity period has passed; see step 3), edgeSDK requires the internet to be available to fetch a new valid token.
If all devices in the cluster are already registered and they are on the same Wi-Fi network, edgeSDK doesn’t need the internet to function.
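The offline behavior above boils down to checking the cached token's validity window locally. A minimal sketch, assuming the token carries an expiry timestamp derived from its scope (the real edgeSDK token format is opaque to the app and may differ):

```javascript
// Hypothetical token shape: { accountId, nodeId, expiresAt }, where
// expiresAt is a Unix timestamp in milliseconds derived from the token's
// scope (step 3). The actual edgeSDK token format may differ.
function isTokenValid(token, nowMs = Date.now()) {
  return Boolean(token) &&
    typeof token.expiresAt === 'number' &&
    token.expiresAt > nowMs;
}

// Steps 4-8 in a nutshell: the node operates offline while the token is
// valid; once it expires, the internet is needed to fetch a fresh one.
function needsInternetForToken(token, nowMs = Date.now()) {
  return !isTokenValid(token, nowMs);
}
```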
Question 4: Does edgeSDK have a dependency on running as SaaS in the cloud? What are the deployment options for the edgeSDK backend?
Answer: The product is deployed in the Amazon cloud in a multi-region configuration, using AWS load-balancing and auto-scaling features. Beyond that, all edgeSDK backend components are Node.js. All deployments are done via Ansible, which minimizes the effort required to deploy on Amazon; the same Ansible scripts can also be used for on-premise deployment with some modifications.
Question 5: In your current SDK, do you have an implementation enabling P2P communication through TCP/UDP hole punching or similar techniques (e.g. as used by BitTorrent, VoIP, etc.)?
Answer: We are not using UDP or TCP hole punching as the primary P2P communication mechanism because of its inconsistency across NAT traversal scenarios.
We use UDP multicast for local supernode discovery. For bootstrap registration and other communication we use HTTPS, and for tunneling to the BEP we use Secure WebSocket (WSS) inbound (BEP to node) and HTTPS outbound (node to BEP). In the future we may consider UDP/TCP hole punching as a secondary mechanism.
Question 7: What is the security architecture for edge?
Answer: Edge contains 3 levels of security:
- Communication encryption (at edgeSDK level communication)
- Payload encryption (at edgeSDK level communication)
- Edge Access Token Authorization
When a node communicates with a supernode, the entire communication is encrypted using the AES-128-GCM encryption algorithm over the HTTP protocol.
In the account-cluster use case, the payload is encrypted using the AES-128-GCM encryption algorithm.
A registered app must use an edge-access-token in order to make API calls to edgeSDK.
Please note: any level of security beyond the levels mentioned above needs to be managed by the app developers, for example:
- App to edge-microservice communication security
- edge-microservice to edge-microservice (link-local) communication security
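As an illustration of the third level (the edge access token), an app-side call might attach the token like this. The header convention, local port, and path are assumptions made for the sketch, not the documented edgeSDK convention; check the edgeSDK documentation for the exact values.

```javascript
// Hypothetical request builder: attaches the edge access token to a call
// against the locally running edgeSDK instance. The header name, port,
// and path layout here are assumptions for illustration only.
function buildEdgeRequest(path, edgeAccessToken) {
  return {
    host: '127.0.0.1',  // edgeSDK serves the app on the local node
    port: 8083,         // assumed local port
    path,
    headers: { Authorization: `Bearer ${edgeAccessToken}` },
  };
}
```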
Question 8: Why can't HTTPS be utilized for edge-level security?
Answer: It can't be utilized for a number of reasons; here are a few:
- HTTPS requires a signed certificate
- A signed certificate requires a valid, registered domain name
- Saving the certificate's private key on every single link-local node in a secure way is near impossible
You can still encrypt the application payload using any available off-the-shelf security algorithm (e.g. AES-128-GCM).
Question 9: Since the cloud/fog will be operated by us and the client will not be just any client but our client, it is possible, for example with wget + apache/nginx + a custom certificate, to have an HTTPS connection, given that wget is instructed to trust the self-signed certificate coming from the server. Will the same be possible with the edgeSDK?
Answer: The short answer is YES, since utilizing HTTPS in this example is completely independent of the edgeSDK. HTTPS can be supported as long as you manage the certificate and the IP of the device, which can be achieved via standard hardware or software load balancing, as per DevOps preference for the production environment, as illustrated in the following diagrams:
Question 10: Will it be possible to set TCP_NODELAY in the edgeSDK optionally?
Answer: Yes, but keep in mind that TCP_NODELAY disables Nagle's algorithm, which creates a risk of network congestion from many small packets.
Question 11: Our benchmark stopped at 9500 because, after more than 20 seconds on our machine, there was a timeout in the edgeSDK with a 500 status code. Would it be possible to configure this timeout to other values?
Answer: Yes you can, but we highly recommend that you don't. The 20-second timeout was deliberately designed as part of our edge-container quota management policy, which prevents a microservice from monopolizing the edge node's CPU time.
Question 12: How does a serverless architecture differ from a traditional IT architecture -- whether on-premises or already cloud-based?
Answer: Serverless is an architecture and set of technologies that moves on-demand computing to the next level: a request triggers the deployment of the very function that handles it. "Serverless" is a misnomer, since you still need a listening component (a server); but instead of having a complete server waiting for the request, only an API gateway is up, and the gateway instantiates the function or microservice needed to process the request.
If limited to that approach, serverless is just an evolution of IT architecture.
However, by making the deployment of functions or microservices dynamic, serverless architecture also introduces the notion of fluid software, since it is possible to decide where and when a function or microservice will be deployed. Therefore, based on conditions (derived from analytics), it is possible to deploy the function or microservice closer to the request generator, which could be an edge node. In this case, serverless architecture is a fundamental transformation, since it breaks the client-server architecture. The shift from legacy architecture will include the following considerations:
- solutions have to be micro-service based
- there may not be a central component, or the central component may be limited to a discovery service
- micro-services may run on the same device the application making the request is running
- micro-services are inherently single-tenant and potentially single-user
Question 13: What kinds of services and solutions should managers and professionals turn to in order to build and support their serverless architecture?
Answer: It is important to understand extreme decomposition, since serverless implies microservices, which in turn means understanding clusters and cluster management. Because of the fluidity of the solution, it is also important to understand extreme distribution, including the edge cloud, which modifies the criteria and scope of cloud-based cluster management (clusters based on proximity or on user account). So technologies like Kubernetes for cluster management, and sidecar patterns like Istio or mimik edge, are important to understand. It is also important to understand automated deployment, since non-human-driven deployment and SCM will be mandatory for the success of a serverless/microservice architecture.
Question 14: How do security protocols and processes differ in a serverless environment?
Answer: The security protocols do not change; however, because serverless/microservice-based solutions are much more distributed, it is important not to depend only on a central trust authority, and to use peer-to-peer token validation for API requests. It is also important not to assume that the system components will be behind a firewall, and therefore to assume that the network is trustless. Finally, it is important to handle multiple levels of security, since sensitive payloads may go through relay microservices. For example, user information may go through a tunnel microservice; while the call to the tunnel is protected by a token, it is also necessary to protect the user information itself, so that the tunnel cannot interpret it.
Question 15: How does the storage component of serverless stack up to previous architectures? Are there additional considerations required for serverless?
Answer: In serverless/microservice architectures, each instance has to be stateless, and therefore the storage components are key to storing state, as opposed to some legacy systems where state was maintained by the non-storage components. Given the distributed nature of serverless/microservice-based systems, and due to a theoretical limitation (the CAP theorem), the storage will most likely be BASE as opposed to the ACID legacy storage. Clever partitioning strategies, like explicit addressing and geocentric storage, have to be put in place in order to cope with the eventual consistency of the system.
Question 16: Is cloud-based compute power a concern? How can the need for back-end power be addressed in a serverless setting?
Answer: In serverless/microservice systems, the computing demand is mediated by the application itself, and therefore there is a closer fit between the allocated computing power and the computing power actually used. Due to the dynamic and fluid nature of these systems, it is also possible to offload computation to other computing nodes, like edge devices or gateways, and thereby further reduce the need for cloud computing power.
Any additional thoughts and considerations on building a serverless architecture are welcome!
The evolution of serverless architecture will make the discovery service a key part of the system, since one of the main issues will not be whether a service is running, but where the service is or will be.
Another issue is the maintainability/optimization of the system, since when a service is down or non-existent, this could mean that:
- the service could not start,
- the service went down because of a bug,
- the script to deploy the service is faulty,
- the data used to trigger the deployment of the service is wrong,
- the inference engine that decides to trigger the deployment is not trained properly, or
- it is OK for the service not to run.
Maintaining and debugging serverless/microservice-based systems will have to be based on logs (there is no way to put a breakpoint on a service that is not deployed yet) and on deep analysis of these logs to identify anomaly patterns. Finally, optimization will be key; however, similar to storage systems that are eventually consistent, serverless/microservice-based systems should be treated as eventually optimized.