You can use AWS Snowball Edge devices in locations like cruise ships, oil rigs, and factory floors with limited to no network connectivity for a wide range of machine learning (ML) applications such as surveillance, facial recognition, and industrial inspection. However, given the remote and disconnected nature of these devices, deploying and managing ML models at the edge is often difficult. With AWS IoT Greengrass and Amazon SageMaker Edge Manager, you can perform ML inference on locally generated data on Snowball Edge devices using cloud-trained ML models. You not only benefit from the low latency and cost savings of running local inference, but also reduce the time and effort required to get ML models to production. You can do all this while continuously monitoring and improving model quality across your Snowball Edge device fleet.
In this post, we talk about how you can use AWS IoT Greengrass version 2.0 or higher and Edge Manager to optimize, secure, monitor, and maintain a simple TensorFlow classification model to classify shipping containers (connex) and people.
Getting started
To get started, order a Snowball Edge device (for more information, see Creating an AWS Snowball Edge Job). You can order a Snowball Edge device with an AWS IoT Greengrass validated AMI on it.
After you receive the device, you can use AWS OpsHub for Snow Family or the Snowball Edge client to unlock the device. You can start an Amazon Elastic Compute Cloud (Amazon EC2) instance with the latest AWS IoT Greengrass installed, or use the commands in AWS OpsHub for Snow Family.
Launch and install an AMI with the following requirements, or provide an AMI reference on the AWS Snow Family console before ordering so that the device ships with all libraries and data in the AMI:
- The ML framework of your choice, such as TensorFlow, PyTorch, or MXNet
- Docker (if you intend to use it)
- AWS IoT Greengrass
- Any other libraries you may need
Prepare the AMI at the time of ordering the Snowball Edge device on the AWS Snow Family console. For instructions, see Using Amazon EC2 Compute Instances. You also have the option to update the AMI after the Snowball is deployed to your edge location.
Install the latest AWS IoT Greengrass on Snowball Edge
To install AWS IoT Greengrass on your device, complete the following steps:
- Install the latest AWS IoT Greengrass on your Snowball Edge device. Make sure dev_tools=True is set so that you have the Greengrass CLI (ggv2 cli). We reference the --thing-name you choose here later, when we set up Edge Manager.
- Run the following command to test your installation:
- On the AWS IoT console, validate that the Snowball Edge device registered successfully with your AWS IoT Greengrass account.
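The installation steps above can be sketched as follows. This is a minimal sketch that composes a typical AWS IoT Greengrass v2 installer command; the Region, thing name, and installer path are assumptions and will differ in your environment.

```python
# Sketch: compose a typical AWS IoT Greengrass v2 installer command.
# The Region, thing name, and installer path are assumptions; replace
# them with values for your environment.
installer_cmd = [
    "sudo", "-E", "java", "-Droot=/greengrass/v2", "-Dlog.store=FILE",
    "-jar", "./GreengrassInstaller/lib/Greengrass.jar",
    "--aws-region", "us-west-2",
    "--thing-name", "MyGreengrassCore",        # referenced later by Edge Manager
    "--component-default-user", "ggc_user:ggc_group",
    "--provision", "true",
    "--setup-system-service", "true",
    "--deploy-dev-tools", "true",              # installs the Greengrass CLI (dev_tools)
]
print(" ".join(installer_cmd))
```

With the dev tools deployed, you can test the installation locally with a command such as `/greengrass/v2/bin/greengrass-cli component list`.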
Optimize ML models with Edge Manager
We use Edge Manager to deploy and manage the model on Snowball Edge.
- Install the Edge Manager agent on Snowball Edge using the latest AWS IoT Greengrass.
- Train and store your ML model.
You can train your ML model using any framework of your choice and save it to an Amazon Simple Storage Service (Amazon S3) bucket. In the following screenshot, we use TensorFlow to train a multi-label model to classify connex and people in an image. The model used here is saved to an S3 bucket by first creating a .tar file.
After the model is saved (TensorFlow Lite in this case), you can start an Amazon SageMaker Neo compilation job to optimize the ML model for Snowball Edge Compute (SBE_C).
- On the SageMaker console, under Inference in the navigation pane, choose Compilation jobs.
- Choose Create compilation job.
- Give your job a name and create or use an existing role.
If you’re creating a new AWS Identity and Access Management (IAM) role, ensure that SageMaker has access to the bucket in which the model is saved.
- In the Input configuration section, for Location of model artifacts, enter the path to the model.tar.gz file you saved (in this case, s3://feidemo/tfconnexmodel/connexmodel.tar.gz).
- For Data input configuration, enter the ML model’s input layer (its name and its shape). In this case, the layer is called keras_layer_input and its shape is [1,224,224,3], so we enter {"keras_layer_input":[1,224,224,3]}.
- For Machine learning framework, choose TFLite.
- For Target device, choose sbe_c.
- Leave Compiler options blank.
- For S3 Output location, enter the same location where your model is saved, with the prefix (folder) output. For example, we enter s3://feidemo/tfconnexmodel/output.
- Choose Submit to start the compilation job.
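If you prefer to script the console steps above, the same compilation job can be started with the SageMaker create_compilation_job API. This is a sketch: the job name and role ARN are placeholders, while the S3 paths, framework, input layer, and target device match the example in this post.

```python
import json

# Sketch: request for SageMaker Neo's create_compilation_job API
# (pass it to boto3's sagemaker client). The job name and role ARN
# are placeholders; the rest matches the console walkthrough above.
compilation_request = {
    "CompilationJobName": "connex-tflite-sbe",                    # hypothetical name
    "RoleArn": "arn:aws:iam::111122223333:role/NeoCompileRole",   # placeholder
    "InputConfig": {
        "S3Uri": "s3://feidemo/tfconnexmodel/connexmodel.tar.gz",
        # Input layer name and shape, serialized as JSON
        "DataInputConfig": json.dumps({"keras_layer_input": [1, 224, 224, 3]}),
        "Framework": "TFLITE",
    },
    "OutputConfig": {
        "S3OutputLocation": "s3://feidemo/tfconnexmodel/output",
        "TargetDevice": "sbe_c",   # Snowball Edge Compute
    },
    "StoppingCondition": {"MaxRuntimeInSeconds": 900},
}
# boto3.client("sagemaker").create_compilation_job(**compilation_request)
```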
Now you create a model deployment package to be used by Edge Manager.
- On the SageMaker console, under Edge Manager, choose Edge packaging jobs.
- Choose Create Edge packaging job.
- In the Job properties section, enter the job details.
- In the Model source section, for Compilation job name, enter the name you provided for the Neo compilation job.
- Choose Next.
- In the Output configuration section, for S3 bucket URI, enter where you want to store the package in Amazon S3.
- For Component name, enter a name for your AWS IoT Greengrass component.
This step creates an AWS IoT Greengrass model component where the model is downloaded from Amazon S3 and uncompressed to local storage on Snowball Edge.
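The packaging job can also be created programmatically with the create_edge_packaging_job API. This sketch assumes the hypothetical job, model, and role names shown; the preset deployment type asks Edge Manager to emit an AWS IoT Greengrass v2 model component.

```python
# Sketch: request for SageMaker's create_edge_packaging_job API.
# Job name, model name, and role ARN are placeholders; the
# compilation job name refers to the Neo job created earlier.
packaging_request = {
    "EdgePackagingJobName": "connex-packaging",       # hypothetical
    "CompilationJobName": "connex-tflite-sbe",        # the Neo compilation job
    "ModelName": "connex-classifier",
    "ModelVersion": "1.0",
    "RoleArn": "arn:aws:iam::111122223333:role/EdgeManagerRole",  # placeholder
    "OutputConfig": {
        "S3OutputLocation": "s3://feidemo/tfconnexmodel/package/",
        # Emit the package as a Greengrass v2 model component
        "PresetDeploymentType": "GreengrassV2Component",
    },
}
# boto3.client("sagemaker").create_edge_packaging_job(**packaging_request)
```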
- Create a device fleet to manage a group of devices, in this case, just one (SBE).
- For IAM role, enter the role generated by AWS IoT Greengrass earlier (--tes-role-name).
Make sure it has the required permissions by going to the IAM console, searching for the role, and adding the required policies to it.
- Register the Snowball Edge device to the fleet you created.
- In the Device source section, enter the device name. The IoT thing name needs to match the name you used earlier (in this case, --thing-name MyGreengrassCore).
You can register additional Snowball devices on the SageMaker console to add them to the device fleet, which allows you to group and manage these devices together.
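Fleet creation and device registration can likewise be scripted with the create_device_fleet and register_devices APIs. In this sketch, the fleet name, device name, role ARN, and S3 location are assumptions; the IoT thing name must match the --thing-name used when installing AWS IoT Greengrass.

```python
# Sketch: create a device fleet and register the Snowball Edge device.
# Fleet name, device name, role ARN, and S3 location are placeholders.
fleet_request = {
    "DeviceFleetName": "snowball-fleet",
    "RoleArn": "arn:aws:iam::111122223333:role/GreengrassV2TokenExchangeRole",
    # Edge Manager stores captured input/prediction data here
    "OutputConfig": {"S3OutputLocation": "s3://feidemo/edge-manager-data/"},
}
register_request = {
    "DeviceFleetName": "snowball-fleet",
    "Devices": [
        # IotThingName must match the --thing-name from the Greengrass install
        {"DeviceName": "sbe-device-1", "IotThingName": "MyGreengrassCore"},
    ],
}
# sm = boto3.client("sagemaker")
# sm.create_device_fleet(**fleet_request)
# sm.register_devices(**register_request)
```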
Deploy ML models to Snowball Edge using AWS IoT Greengrass
In the previous sections, you unlocked and configured your Snowball Edge device. The ML model is now compiled and optimized for performance on Snowball Edge. An Edge Manager package is created with the compiled model and the Snowball device is registered to a fleet. In this section, you look at the steps involved in deploying the ML model for inference to Snowball Edge with the latest AWS IoT Greengrass.
Components
AWS IoT Greengrass allows you to deploy to edge devices as a combination of components and associated artifacts. Components are JSON documents that contain the metadata and lifecycle: what to deploy, what to install, and when. Components also define which operating system to target and which artifacts to use when running on different OS options.
Artifacts
Artifacts can be code files, models, or container images. For example, a component can be defined to install a pandas Python library and run a code file that will transform the data, or to install a TensorFlow library and run the model for inference. The following are example artifacts needed for an inference application deployment:
- gRPC proto and Python stubs (this can be different based on your model and framework)
- Python code to load the model and perform inference
These two items are uploaded to an S3 bucket.
Deploy the components
The deployment needs the following components:
- Edge Manager agent (available in public components at GA)
- Model
- Application
Complete the following steps to deploy the components:
- On the AWS IoT console, under Greengrass, choose Components, and create the application component.
- Find the Edge Manager agent component in the public components list and deploy it.
- Deploy a model component created by Edge Manager, which is used as a dependency in the application component.
- Deploy the application component to the edge device by going to the list of AWS IoT Greengrass deployments and creating a new deployment.
If you have an existing deployment, you can revise it to add the application component.
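The deployment itself can be created with the AWS IoT Greengrass v2 create_deployment API. In this sketch, the thing ARN, the deployment name, and the model and application component names are assumptions; the model component is the one Edge Manager generated for you.

```python
# Sketch: request for the AWS IoT Greengrass v2 create_deployment API.
# The thing ARN, deployment name, and com.example.* component names are
# placeholders; the Edge Manager agent is deployed as a public component.
deployment_request = {
    "targetArn": "arn:aws:iot:us-west-2:111122223333:thing/MyGreengrassCore",
    "deploymentName": "sbe-inference-deployment",   # hypothetical
    "components": {
        # Edge Manager agent (public component)
        "aws.greengrass.SageMakerEdgeManager": {"componentVersion": "1.0.0"},
        # Model component generated by the Edge packaging job
        "com.example.ConnexModel": {"componentVersion": "1.0.0"},
        # Application component with the inference code
        "com.example.ConnexInference": {"componentVersion": "1.0.0"},
    },
}
# boto3.client("greengrassv2").create_deployment(**deployment_request)
```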
Now you can test your component.
- In your prediction or inference code deployed with the application component, add logic to access files locally on the Snowball Edge device (for example, in an incoming folder) and move the predictions or processed files to a processed folder.
- Log in to the device to see if the predictions have been made.
- Set up the code to run on a loop, checking the incoming folder for new files, processing the files, and moving them to the processed folder.
The following screenshot is an example setup of files before deployment inside the Snowball Edge.
After deployment, all the test images have classes of interest and therefore are moved to the processed folder.
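The incoming/processed folder loop described above can be sketched in Python as follows. The folder names are the ones used in this example, and predict is a placeholder for the call that sends each image to the loaded model (via the Edge Manager agent); it is not the agent's actual API.

```python
import shutil
import time
from pathlib import Path

def process_folder(incoming: Path, processed: Path, predict) -> int:
    """Run predict() on each file in `incoming`, then move it to `processed`.

    `predict` is a placeholder for the inference call against the loaded
    model. Returns the number of files handled in this pass.
    """
    processed.mkdir(parents=True, exist_ok=True)
    count = 0
    for path in sorted(incoming.iterdir()):   # materialized list, safe to move
        if path.is_file():
            predict(path)                      # inference on the local file
            shutil.move(str(path), str(processed / path.name))
            count += 1
    return count

def watch(incoming: Path, processed: Path, predict, interval: float = 5.0):
    """Poll the incoming folder on a loop, as the deployed component would."""
    while True:
        process_folder(incoming, processed, predict)
        time.sleep(interval)
```

A single pass with a stub predict function is enough to verify the file movement before wiring in the real inference call.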
Clean up
To clean up everything or reimplement this solution from scratch, stop all the EC2 instances by invoking the TerminateInstances API against the EC2-compatible endpoint running on your Snowball Edge device. To return your Snowball Edge device, see Powering Off the Snowball Edge and Returning the Snowball Edge Device.
Conclusion
This post walked you through how to order a Snowball Edge device with an AMI of your choice. You then compiled a model for the edge using SageMaker, packaged that model using Edge Manager, and created and ran components with artifacts to perform ML inference on Snowball Edge using the latest AWS IoT Greengrass. With Edge Manager, you can deploy and update your ML models on a fleet of Snowball Edge devices, and monitor performance at the edge with saved input and prediction data on Amazon S3. You can also run these components as long-running AWS Lambda functions that can spin up a model and wait for data to do inference.
You can combine several features of AWS IoT Greengrass to create an MQTT client and use a pub/sub model to invoke other services or microservices. The possibilities are endless.
By running ML inference on Snowball Edge with Edge Manager and AWS IoT Greengrass, you can optimize, secure, monitor, and maintain ML models on fleets of Snowball Edge devices. Thanks for reading and please do not hesitate to leave questions or comments in the comments section.
To learn more about AWS Snow Family, AWS IoT Greengrass, and Edge Manager, check out the following:
About the Authors
Raj Kadiyala is an AI/ML Tech Business Development Manager in the AWS WWPS Partner Organization. Raj has over 12 years of experience in machine learning and likes to spend his free time exploring machine learning for practical everyday solutions and staying active in the great outdoors of Colorado.
Nida Beig is a Sr. Product Manager – Tech at Amazon Web Services where she works on the AWS Snow Family team. She is passionate about understanding customer needs, and using technology as a conductor of transformative thinking to deliver consumer products. Besides work, she enjoys traveling, hiking, and running.