{"_id":"com.unity.ai.inference","name":"com.unity.ai.inference","description":"Sentis is a neural network inference library. It enables you to import trained neural network models, connect the network inputs and outputs to your game code, and then run them locally in your end-user app. Use cases include capabilities like natural language processing, object recognition, automated game opponents, sensor data classification, and many more.\n\nSentis automatically optimizes your network for real-time use to speed up inference. It also allows you to tune your implementation further with tools like frame slicing, quantization, and custom backend (i.e. compute type) dispatching.\n\nVisit https://unity.com/ai for more resources.","provider":"upm","versions":{"2.6.1":{"_upm":{"changelog":"### Fixed\n- Documentation fix"},"unity":"6000.0","samples":[{"path":"Samples~/Convert tensors to textures","description":"Examples of converting tensors to textures.","displayName":"Convert tensors to textures"},{"path":"Samples~/Convert textures to tensors","description":"Examples of converting textures to textures.","displayName":"Convert textures to tensors"},{"path":"Samples~/Copy a texture tensor to the screen","description":"An example of using TextureConverter.RenderToScreen to copy a texture tensor to the screen.","displayName":"Copy a texture tensor to the screen"},{"path":"Samples~/Encrypt a model","description":"Example of serializing an encrypted model to disk using a custom editor window and loading that encrypted model at runtime.","displayName":"Encrypt a model"},{"path":"Samples~/Quantize a model","description":"Example of serializing a quantized model to disk using a custom editor window and loading that quantized model at runtime.","displayName":"Quantize a model"},{"path":"Samples~/Read output asynchronously","description":"Examples of reading the output from a model asynchronously, using compute shaders or Burst.","displayName":"Read output 
asynchronously"},{"path":"Samples~/Run a model","description":"Examples of running models with different numbers of inputs and outputs.","displayName":"Run a model"},{"path":"Samples~/Run a model a layer at a time","description":"An example of using ScheduleIterable to run a model a layer a time.","displayName":"Run a model a layer at a time"},{"path":"Samples~/Tokenizer - All Mini LM","description":"An example of using Unity Tokenizer with a All Mini LM tokenizer configuration from Hugging Face.","displayName":"Tokenizer"},{"path":"Samples~/Use a compute buffer","description":"An example of using a compute shader to write data to a tensor on the GPU.","displayName":"Use a compute buffer"},{"path":"Samples~/Use a job to write data","description":"An example of using Burst to write data to a tensor in the Job system.","displayName":"Use Burst to write data"},{"path":"Samples~/Use tensor indexing methods","description":"Examples of using tensor indexing methods to get and set tensor values.","displayName":"Use tensor indexing methods"},{"path":"Samples~/Use the functional API with an existing model","description":"An example of using the functional API to extend an existing model.","displayName":"Use the functional API with an existing model"}],"documentationUrl":"https://docs.unity3d.com/Packages/com.unity.ai.inference@2.6/manual/index.html","name":"com.unity.ai.inference","version":"2.6.1","displayName":"Sentis","description":"Sentis is a neural network inference library. It enables you to import trained neural network models, connect the network inputs and outputs to your game code, and then run them locally in your end-user app. Use cases include capabilities like natural language processing, object recognition, automated game opponents, sensor data classification, and many more.\n\nSentis automatically optimizes your network for real-time use to speed up inference. 
It also allows you to tune your implementation further with tools like frame slicing, quantization, and custom backend (i.e. compute type) dispatching.\n\nVisit https://unity.com/ai for more resources.","dependencies":{"com.unity.burst":"1.8.17","com.unity.dt.app-ui":"1.3.6","com.unity.collections":"2.4.3","com.unity.nuget.newtonsoft-json":"3.2.1","com.unity.modules.imageconversion":"1.0.0"},"dist":{"shasum":"9a123aee5df7bf5c7b84e4fb3d46470e013f6a36","tarball":"https://download.packages.unity.com/com.unity.ai.inference/-/com.unity.ai.inference-2.6.1.tgz"}},"2.6.0":{"_upm":{"changelog":"### Added\n- Officially support ONNX opset up to version 25\n- Functional methods for `Swish` and `RMSNorm`\n- Support for the `alpha` argument for the `Swish` operator\n- Improved analytics and error reporting when importing models with unsupported operators\n- Now supporting Fast Enter Play Mode for CoreCLR compatibility\n- Added support for `Buffer` in PyTorch import\n- Tokenizer: Generic truncation with support for `longestfirst`, `onlyfirst` and `onlysecond`\n\n### Changed\n- Now using Unity's `Mathematics.Random` instead of `System.Random`\n- Improved documentation for the `Tensor` and `Functional` APIs\n- Maintenance of documentation links\n- Updated documentation for Cubic interpolation mode support limitations\n\n### Fixed\n- Updated behavior of `ReduceL1`, `ReduceL2`, `ReduceSumSquare` and `ReduceLogSum` operators when argument `noop_with_empty_axes` is `true` and `axes` are empty\n- `Interpolate` issue with argument `scaleFactor`\n- Fix when GPU allocations (`ComputeTensorData`) are used for tensors without a corresponding backend.\n- Prevent crashes when closing the editor while in Play mode\n- Fixed memory leak in PyTorch Import\n- Fix GPU crash on Nintendo Switch 2 for convolution with padding on GPU compute\n- Fixed issue with `Split` operator when importing `.sentis` file"},"unity":"6000.0","samples":[{"path":"Samples~/Convert tensors to 
textures","description":"Examples of converting tensors to textures.","displayName":"Convert tensors to textures"},{"path":"Samples~/Convert textures to tensors","description":"Examples of converting textures to textures.","displayName":"Convert textures to tensors"},{"path":"Samples~/Copy a texture tensor to the screen","description":"An example of using TextureConverter.RenderToScreen to copy a texture tensor to the screen.","displayName":"Copy a texture tensor to the screen"},{"path":"Samples~/Encrypt a model","description":"Example of serializing an encrypted model to disk using a custom editor window and loading that encrypted model at runtime.","displayName":"Encrypt a model"},{"path":"Samples~/Quantize a model","description":"Example of serializing a quantized model to disk using a custom editor window and loading that quantized model at runtime.","displayName":"Quantize a model"},{"path":"Samples~/Read output asynchronously","description":"Examples of reading the output from a model asynchronously, using compute shaders or Burst.","displayName":"Read output asynchronously"},{"path":"Samples~/Run a model","description":"Examples of running models with different numbers of inputs and outputs.","displayName":"Run a model"},{"path":"Samples~/Run a model a layer at a time","description":"An example of using ScheduleIterable to run a model a layer a time.","displayName":"Run a model a layer at a time"},{"path":"Samples~/Tokenizer - All Mini LM","description":"An example of using Unity Tokenizer with a All Mini LM tokenizer configuration from Hugging Face.","displayName":"Tokenizer"},{"path":"Samples~/Use a compute buffer","description":"An example of using a compute shader to write data to a tensor on the GPU.","displayName":"Use a compute buffer"},{"path":"Samples~/Use a job to write data","description":"An example of using Burst to write data to a tensor in the Job system.","displayName":"Use Burst to write data"},{"path":"Samples~/Use tensor indexing 
methods","description":"Examples of using tensor indexing methods to get and set tensor values.","displayName":"Use tensor indexing methods"},{"path":"Samples~/Use the functional API with an existing model","description":"An example of using the functional API to extend an existing model.","displayName":"Use the functional API with an existing model"}],"documentationUrl":"https://docs.unity3d.com/Packages/com.unity.ai.inference@2.6/manual/index.html","name":"com.unity.ai.inference","version":"2.6.0","displayName":"Sentis","description":"Sentis is a neural network inference library. It enables you to import trained neural network models, connect the network inputs and outputs to your game code, and then run them locally in your end-user app. Use cases include capabilities like natural language processing, object recognition, automated game opponents, sensor data classification, and many more.\n\nSentis automatically optimizes your network for real-time use to speed up inference. It also allows you to tune your implementation further with tools like frame slicing, quantization, and custom backend (i.e. 
compute type) dispatching.\n\nVisit https://unity.com/ai for more resources.","dependencies":{"com.unity.burst":"1.8.17","com.unity.dt.app-ui":"1.3.6","com.unity.collections":"2.4.3","com.unity.nuget.newtonsoft-json":"3.2.1","com.unity.modules.imageconversion":"1.0.0"},"dist":{"shasum":"a64d05bf1b13d8bd70e279c13d2bcfe6c56da5d8","tarball":"https://download.packages.unity.com/com.unity.ai.inference/-/com.unity.ai.inference-2.6.0.tgz"}},"2.5.0":{"_upm":{"changelog":"### Added\n- `PyTorch` model import\n- `LRN (Local Response Normalization)` operator implemented on all backends\n- `3D MaxPool` and `AveragePool` operators implemented on all backends\n- Sentis Importer: Allow users to specify dynamic dimensions as static on Sentis model import, same as we do for ONNX\n- Tokenizer Additions\n\t- `Hugging Face` parser\n\t- Sequence decoder\n\t- Regex replace decoder\n\t- String split pre-tokenizer\n\t- Unigram Mapper\n\t- Byte-based substring feature to SubString\n\t- Padding: support \"pad multiple of\" option\n\t- Split pre-tokenizers: support \"invert\"\n\t- StripAccents normalizer\n\t- Rune split pre-tokenizer\n\t- Strip normalizer\n\t- WordLevel model\n\t- WhitespaceSplit pre-tokenizer\n\t- Metaspace pre-tokenizer and decoder\n\t- Whitespace pre-tokenizer\n\t- NMT normalizer\n\t- Punctuation pre-tokenizer\n\t- Digits pre-tokenizer\n\t- CharDelimiterSplit pre-tokenizer\n\t- BPE decoder\n\n### Changed\n- Model Visualizer: Async loading of model\n- Model Visualizer: updating com.unity.dt.app-ui to 1.3.3\n- Resize operator on CPU no longer uses main (mono) thread path\n- All model converters use switch-case instead of if-else cascade\n- Migrate Mono APIs to CoreCLR-compatible APIs\n\n### Fixed\n- Editor crash when quitting in play mode\n- Memory Leak in FuseConstantPass\n- `Clip` operator improvement: no longer need CPU fallback for min/max parameters\n- `Mod` operator fix: on some platforms with float operands, could return an incorrect value when one of them was 0\n- 
Faulty optimization pass\n- Fix in existing Burst code for 2D pooling vectorization calculations\n- `TopK` issue on `GPUCompute` when dimension is specified\n- Fix source generator empty array\n- Tokenizer Fixes\n\t- Special added token decoding condition\n\t- Fix added token whole word handling\n\t- Gpt2Splitter substring length computation\n\t- Added vocabulary pre-tokenization\n\t- ByteLevelDecoder empty-byte guard in string generation\n\t- DefaultDecoder: joining tokens with whitespace\n\t- BPE: fix merging, applying on each word instead of the whole string\n\t- DefaultPostProcessor: apply the proper type id\n\t- RobertaPostProcessor: fix attention and type id assignment\n\t- TemplatePostProcessor: fix type id assignment\n\t- Assign default type id to sequences\n\t- Better surrogate characters support\n\t- Fix ByteFallback: inserting the right amount of \ufffd char\n\t- Fix BertPreTokenizer\n\t- Default model determination based on chain of responsibility"},"unity":"6000.0","samples":[{"path":"Samples~/Convert tensors to textures","description":"Examples of converting tensors to textures.","displayName":"Convert tensors to textures"},{"path":"Samples~/Convert textures to tensors","description":"Examples of converting textures to tensors.","displayName":"Convert textures to tensors"},{"path":"Samples~/Copy a texture tensor to the screen","description":"An example of using TextureConverter.RenderToScreen to copy a texture tensor to the screen.","displayName":"Copy a texture tensor to the screen"},{"path":"Samples~/Encrypt a model","description":"Example of serializing an encrypted model to disk using a custom editor window and loading that encrypted model at runtime.","displayName":"Encrypt a model"},{"path":"Samples~/Quantize a model","description":"Example of serializing a quantized model to disk using a custom editor window and loading that quantized model at runtime.","displayName":"Quantize a model"},{"path":"Samples~/Read output 
asynchronously","description":"Examples of reading the output from a model asynchronously, using compute shaders or Burst.","displayName":"Read output asynchronously"},{"path":"Samples~/Run a model","description":"Examples of running models with different numbers of inputs and outputs.","displayName":"Run a model"},{"path":"Samples~/Run a model a layer at a time","description":"An example of using ScheduleIterable to run a model a layer a time.","displayName":"Run a model a layer at a time"},{"path":"Samples~/Tokenizer - All Mini LM","description":"An example of using Unity Tokenizer with a All Mini LM tokenizer configuration from Hugging Face.","displayName":"Tokenizer"},{"path":"Samples~/Use a compute buffer","description":"An example of using a compute shader to write data to a tensor on the GPU.","displayName":"Use a compute buffer"},{"path":"Samples~/Use a job to write data","description":"An example of using Burst to write data to a tensor in the Job system.","displayName":"Use Burst to write data"},{"path":"Samples~/Use tensor indexing methods","description":"Examples of using tensor indexing methods to get and set tensor values.","displayName":"Use tensor indexing methods"},{"path":"Samples~/Use the functional API with an existing model","description":"An example of using the functional API to extend an existing model.","displayName":"Use the functional API with an existing model"}],"documentationUrl":"https://docs.unity3d.com/Packages/com.unity.ai.inference@2.5/manual/index.html","name":"com.unity.ai.inference","version":"2.5.0","displayName":"Sentis","description":"Sentis is a neural network inference library. It enables you to import trained neural network models, connect the network inputs and outputs to your game code, and then run them locally in your end-user app. 
Use cases include capabilities like natural language processing, object recognition, automated game opponents, sensor data classification, and many more.\n\nSentis automatically optimizes your network for real-time use to speed up inference. It also allows you to tune your implementation further with tools like frame slicing, quantization, and custom backend (i.e. compute type) dispatching.\n\nVisit https://unity.com/ai for more resources.","dependencies":{"com.unity.burst":"1.8.17","com.unity.dt.app-ui":"1.3.3","com.unity.collections":"2.4.3","com.unity.nuget.newtonsoft-json":"3.2.1","com.unity.modules.imageconversion":"1.0.0"},"dist":{"shasum":"e760aa121ec7377fbb66f8b2eb6e32044929894c","tarball":"https://download.packages.unity.com/com.unity.ai.inference/-/com.unity.ai.inference-2.5.0.tgz"}},"2.4.1":{"_upm":{"changelog":"### Fixed\n- Small error in documentation preventing user manual publication"},"unity":"6000.0","samples":[{"path":"Samples~/Convert tensors to textures","description":"Examples of converting tensors to textures.","displayName":"Convert tensors to textures"},{"path":"Samples~/Convert textures to tensors","description":"Examples of converting textures to tensors.","displayName":"Convert textures to tensors"},{"path":"Samples~/Copy a texture tensor to the screen","description":"An example of using TextureConverter.RenderToScreen to copy a texture tensor to the screen.","displayName":"Copy a texture tensor to the screen"},{"path":"Samples~/Encrypt a model","description":"Example of serializing an encrypted model to disk using a custom editor window and loading that encrypted model at runtime.","displayName":"Encrypt a model"},{"path":"Samples~/Quantize a model","description":"Example of serializing a quantized model to disk using a custom editor window and loading that quantized model at runtime.","displayName":"Quantize a model"},{"path":"Samples~/Read output asynchronously","description":"Examples of reading the output from a model 
asynchronously, using compute shaders or Burst.","displayName":"Read output asynchronously"},{"path":"Samples~/Run a model","description":"Examples of running models with different numbers of inputs and outputs.","displayName":"Run a model"},{"path":"Samples~/Run a model a layer at a time","description":"An example of using ScheduleIterable to run a model a layer at a time.","displayName":"Run a model a layer at a time"},{"path":"Samples~/Tokenizer - All Mini LM","description":"An example of using Unity Tokenizer with an All Mini LM tokenizer configuration from Hugging Face.","displayName":"Tokenizer"},{"path":"Samples~/Use a compute buffer","description":"An example of using a compute shader to write data to a tensor on the GPU.","displayName":"Use a compute buffer"},{"path":"Samples~/Use a job to write data","description":"An example of using Burst to write data to a tensor in the Job system.","displayName":"Use Burst to write data"},{"path":"Samples~/Use tensor indexing methods","description":"Examples of using tensor indexing methods to get and set tensor values.","displayName":"Use tensor indexing methods"},{"path":"Samples~/Use the functional API with an existing model","description":"An example of using the functional API to extend an existing model.","displayName":"Use the functional API with an existing model"}],"documentationUrl":"https://docs.unity3d.com/Packages/com.unity.ai.inference@2.4/manual/index.html","name":"com.unity.ai.inference","version":"2.4.1","displayName":"Sentis","description":"Sentis is a neural network inference library. It enables you to import trained neural network models, connect the network inputs and outputs to your game code, and then run them locally in your end-user app. Use cases include capabilities like natural language processing, object recognition, automated game opponents, sensor data classification, and many more.\n\nSentis automatically optimizes your network for real-time use to speed up inference. 
It also allows you to tune your implementation further with tools like frame slicing, quantization, and custom backend (i.e. compute type) dispatching.\n\nVisit https://unity.com/ai for more resources.","dependencies":{"com.unity.burst":"1.8.17","com.unity.dt.app-ui":"1.3.1","com.unity.collections":"2.4.3","com.unity.nuget.newtonsoft-json":"3.2.1","com.unity.modules.imageconversion":"1.0.0"},"dist":{"shasum":"587873fd5e1bd2d10f696bceb1a83c296823e927","tarball":"https://download.packages.unity.com/com.unity.ai.inference/-/com.unity.ai.inference-2.4.1.tgz"}},"2.4.0":{"_upm":{"changelog":"### Added\n- LiteRT model import\n- Tokenization API\n- STFT and DFT ONNX operators\n- BlackmanWindow, HammingWindow, HannWindow and MelWeightMatrix ONNX operators\n- BitwiseAnd, BitwiseOr, BitwiseXor, BitwiseNot ONNX operators and functional methods\n- AsStrided, Atan2, Expm1, Log10, Log1p, Log2, Rsqrt, Trunc, ReduceVariance, Diagonal layers, functional methods and optimizer passes\n- NotEqual, FloorDiv, TrueDiv layers and LiteRT operators\n\n### Changed\n- Renamed Inference Engine to Sentis in package name and documentation\n- Improved model import time for ONNX models\n- ONNX model import operator order now consistent with the original model\n- Improved optimization passes to reduce operator count in imported models\n- Improved visualizer loading times and consistency in displaying attributes\n- ScatterND operator can now run on much larger tensors, enabling new models\n- ScatterND operator now allows negative indices\n- ONNX model outputs that are not connected to any inputs are no longer incorrectly pruned\n- Improve model import warning and error display in the inspector\n\n### Fixed\n- Small errors in documentation\n- Faulty optimization passes that could lead to inference issues\n- Memory leaks on model constants\n- Non-matching ProfilerMarker calls\n- Issues in CPU callback which could lead to incorrect inference on some models\n- Enable missing modes for GridSample and 
Upsample operators"},"unity":"6000.0","samples":[{"path":"Samples~/Convert tensors to textures","description":"Examples of converting tensors to textures.","displayName":"Convert tensors to textures"},{"path":"Samples~/Convert textures to tensors","description":"Examples of converting textures to textures.","displayName":"Convert textures to tensors"},{"path":"Samples~/Copy a texture tensor to the screen","description":"An example of using TextureConverter.RenderToScreen to copy a texture tensor to the screen.","displayName":"Copy a texture tensor to the screen"},{"path":"Samples~/Encrypt a model","description":"Example of serializing an encrypted model to disk using a custom editor window and loading that encrypted model at runtime.","displayName":"Encrypt a model"},{"path":"Samples~/Quantize a model","description":"Example of serializing a quantized model to disk using a custom editor window and loading that quantized model at runtime.","displayName":"Quantize a model"},{"path":"Samples~/Read output asynchronously","description":"Examples of reading the output from a model asynchronously, using compute shaders or Burst.","displayName":"Read output asynchronously"},{"path":"Samples~/Run a model","description":"Examples of running models with different numbers of inputs and outputs.","displayName":"Run a model"},{"path":"Samples~/Run a model a layer at a time","description":"An example of using ScheduleIterable to run a model a layer a time.","displayName":"Run a model a layer at a time"},{"path":"Samples~/Tokenizer - All Mini LM","description":"An example of using Unity Tokenizer with a All Mini LM tokenizer configuration from Hugging Face.","displayName":"Tokenizer"},{"path":"Samples~/Use a compute buffer","description":"An example of using a compute shader to write data to a tensor on the GPU.","displayName":"Use a compute buffer"},{"path":"Samples~/Use a job to write data","description":"An example of using Burst to write data to a tensor in the Job 
system.","displayName":"Use Burst to write data"},{"path":"Samples~/Use tensor indexing methods","description":"Examples of using tensor indexing methods to get and set tensor values.","displayName":"Use tensor indexing methods"},{"path":"Samples~/Use the functional API with an existing model","description":"An example of using the functional API to extend an existing model.","displayName":"Use the functional API with an existing model"}],"documentationUrl":"https://docs.unity3d.com/Packages/com.unity.ai.inference@2.4/manual/index.html","name":"com.unity.ai.inference","version":"2.4.0","displayName":"Sentis","description":"Sentis is a neural network inference library. It enables you to import trained neural network models, connect the network inputs and outputs to your game code, and then run them locally in your end-user app. Use cases include capabilities like natural language processing, object recognition, automated game opponents, sensor data classification, and many more.\n\nSentis automatically optimizes your network for real-time use to speed up inference. It also allows you to tune your implementation further with tools like frame slicing, quantization, and custom backend (i.e. 
compute type) dispatching.\n\nVisit https://unity.com/ai for more resources.","dependencies":{"com.unity.burst":"1.8.17","com.unity.dt.app-ui":"1.3.1","com.unity.collections":"2.4.3","com.unity.nuget.newtonsoft-json":"3.2.1","com.unity.modules.imageconversion":"1.0.0"},"dist":{"shasum":"8de2b073c584eaaa2d3268cfdcf61dcd2b1ef05e","tarball":"https://download.packages.unity.com/com.unity.ai.inference/-/com.unity.ai.inference-2.4.0.tgz"}},"2.2.2":{"_upm":{"changelog":"### Fixed\n- Issue with incorrect TensorShape in Conv layer when dilations are greater than 1 and auto-padding is used\n- Incorrect Third Party Notices"},"unity":"6000.0","samples":[{"path":"Samples~/Convert tensors to textures","description":"Examples of converting tensors to textures.","displayName":"Convert tensors to textures"},{"path":"Samples~/Convert textures to tensors","description":"Examples of converting textures to tensors.","displayName":"Convert textures to tensors"},{"path":"Samples~/Copy a texture tensor to the screen","description":"An example of using TextureConverter.RenderToScreen to copy a texture tensor to the screen.","displayName":"Copy a texture tensor to the screen"},{"path":"Samples~/Encrypt a model","description":"Example of serializing an encrypted model to disk using a custom editor window and loading that encrypted model at runtime.","displayName":"Encrypt a model"},{"path":"Samples~/Quantize a model","description":"Example of serializing a quantized model to disk using a custom editor window and loading that quantized model at runtime.","displayName":"Quantize a model"},{"path":"Samples~/Read output asynchronously","description":"Examples of reading the output from a model asynchronously, using compute shaders or Burst.","displayName":"Read output asynchronously"},{"path":"Samples~/Run a model","description":"Examples of running models with different numbers of inputs and outputs.","displayName":"Run a model"},{"path":"Samples~/Run a model a layer at a 
time","description":"An example of using ScheduleIterable to run a model a layer a time.","displayName":"Run a model a layer at a time"},{"path":"Samples~/Use a compute buffer","description":"An example of using a compute shader to write data to a tensor on the GPU.","displayName":"Use a compute buffer"},{"path":"Samples~/Use a job to write data","description":"An example of using Burst to write data to a tensor in the Job system.","displayName":"Use Burst to write data"},{"path":"Samples~/Use tensor indexing methods","description":"Examples of using tensor indexing methods to get and set tensor values.","displayName":"Use tensor indexing methods"},{"path":"Samples~/Use the functional API with an existing model","description":"An example of using the functional API to extend an existing model.","displayName":"Use the functional API with an existing model"}],"documentationUrl":"https://docs.unity3d.com/Packages/com.unity.ai.inference@2.2/manual/index.html","name":"com.unity.ai.inference","version":"2.2.2","displayName":"Inference Engine","description":"Inference Engine is a neural network inference library. It enables you to import trained neural network models, connect the network inputs and outputs to your game code, and then run them locally in your end-user app. Use cases include capabilities like natural language processing, object recognition, automated game opponents, sensor data classification, and many more.\n\nInference Engine automatically optimizes your network for real-time use to speed up inference. It also allows you to tune your implementation further with tools like frame slicing, quantization, and custom backend (i.e. 
compute type) dispatching.\n\nVisit https://unity.com/ai for more resources.","dependencies":{"com.unity.burst":"1.8.17","com.unity.collections":"2.4.3","com.unity.modules.imageconversion":"1.0.0"},"dist":{"shasum":"3d3e6e2da2186fba0e6edae38c0d054165757750","tarball":"https://download.packages.unity.com/com.unity.ai.inference/-/com.unity.ai.inference-2.2.2.tgz"}},"2.3.0":{"_upm":{"changelog":"### Added\n- Model Visualizer for inspecting models as node-based graphs inside the Unity Editor\n- Support for `Tensor<int>` input for `GatherND` operator on `GPUPixel` backend\n- Support for `Tensor<int>` input for the base of the `Pow` operator on all backends\n- Support for the `group` and `dilations` arguments for the `ConvTranspose` operator on all backends\n- Support for `value_float`, `value_floats`, `value_int` and `value_ints` values in ONNX `Constant` operators\n\n### Changed\n- Optimized single-argument operators on `CPU` backend\n- Optimized deserialization of models to avoid reflection at runtime\n\n### Fixed\n- Einsum operator now works correctly on fallback path"},"unity":"6000.0","samples":[{"path":"Samples~/Convert tensors to textures","description":"Examples of converting tensors to textures.","displayName":"Convert tensors to textures"},{"path":"Samples~/Convert textures to tensors","description":"Examples of converting textures to tensors.","displayName":"Convert textures to tensors"},{"path":"Samples~/Copy a texture tensor to the screen","description":"An example of using TextureConverter.RenderToScreen to copy a texture tensor to the screen.","displayName":"Copy a texture tensor to the screen"},{"path":"Samples~/Encrypt a model","description":"Example of serializing an encrypted model to disk using a custom editor window and loading that encrypted model at runtime.","displayName":"Encrypt a model"},{"path":"Samples~/Quantize a model","description":"Example of serializing a quantized model to disk using a custom editor window and loading that quantized 
model at runtime.","displayName":"Quantize a model"},{"path":"Samples~/Read output asynchronously","description":"Examples of reading the output from a model asynchronously, using compute shaders or Burst.","displayName":"Read output asynchronously"},{"path":"Samples~/Run a model","description":"Examples of running models with different numbers of inputs and outputs.","displayName":"Run a model"},{"path":"Samples~/Run a model a layer at a time","description":"An example of using ScheduleIterable to run a model a layer a time.","displayName":"Run a model a layer at a time"},{"path":"Samples~/Use a compute buffer","description":"An example of using a compute shader to write data to a tensor on the GPU.","displayName":"Use a compute buffer"},{"path":"Samples~/Use a job to write data","description":"An example of using Burst to write data to a tensor in the Job system.","displayName":"Use Burst to write data"},{"path":"Samples~/Use tensor indexing methods","description":"Examples of using tensor indexing methods to get and set tensor values.","displayName":"Use tensor indexing methods"},{"path":"Samples~/Use the functional API with an existing model","description":"An example of using the functional API to extend an existing model.","displayName":"Use the functional API with an existing model"}],"documentationUrl":"https://docs.unity3d.com/Packages/com.unity.ai.inference@2.3/manual/index.html","name":"com.unity.ai.inference","version":"2.3.0","displayName":"Inference Engine","description":"Inference Engine is a neural network inference library. It enables you to import trained neural network models, connect the network inputs and outputs to your game code, and then run them locally in your end-user app. Use cases include capabilities like natural language processing, object recognition, automated game opponents, sensor data classification, and many more.\n\nInference Engine automatically optimizes your network for real-time use to speed up inference. 
It also allows you to tune your implementation further with tools like frame slicing, quantization, and custom backend (i.e. compute type) dispatching.\n\nVisit https://unity.com/ai for more resources.","dependencies":{"com.unity.burst":"1.8.17","com.unity.dt.app-ui":"1.3.1","com.unity.collections":"2.4.3","com.unity.modules.imageconversion":"1.0.0"},"dist":{"shasum":"4ac711cab9a36baa41c7c65b01461f91c0537337","tarball":"https://download.packages.unity.com/com.unity.ai.inference/-/com.unity.ai.inference-2.3.0.tgz"}},"2.2.1":{"_upm":{"changelog":"### Fixed\n- Issue with incorrect TensorShape in Conv layer when dilations are greater than 1 and auto-padding is used\n- Incorrect Third Party Notices"},"unity":"6000.0","samples":[{"path":"Samples~/Convert tensors to textures","description":"Examples of converting tensors to textures.","displayName":"Convert tensors to textures"},{"path":"Samples~/Convert textures to tensors","description":"Examples of converting textures to tensors.","displayName":"Convert textures to tensors"},{"path":"Samples~/Copy a texture tensor to the screen","description":"An example of using TextureConverter.RenderToScreen to copy a texture tensor to the screen.","displayName":"Copy a texture tensor to the screen"},{"path":"Samples~/Encrypt a model","description":"Example of serializing an encrypted model to disk using a custom editor window and loading that encrypted model at runtime.","displayName":"Encrypt a model"},{"path":"Samples~/Quantize a model","description":"Example of serializing a quantized model to disk using a custom editor window and loading that quantized model at runtime.","displayName":"Quantize a model"},{"path":"Samples~/Read output asynchronously","description":"Examples of reading the output from a model asynchronously, using compute shaders or Burst.","displayName":"Read output asynchronously"},{"path":"Samples~/Run a model","description":"Examples of running models with different numbers of inputs and 
outputs.","displayName":"Run a model"},{"path":"Samples~/Run a model a layer at a time","description":"An example of using ScheduleIterable to run a model a layer at a time.","displayName":"Run a model a layer at a time"},{"path":"Samples~/Use a compute buffer","description":"An example of using a compute shader to write data to a tensor on the GPU.","displayName":"Use a compute buffer"},{"path":"Samples~/Use a job to write data","description":"An example of using Burst to write data to a tensor in the Job system.","displayName":"Use Burst to write data"},{"path":"Samples~/Use tensor indexing methods","description":"Examples of using tensor indexing methods to get and set tensor values.","displayName":"Use tensor indexing methods"},{"path":"Samples~/Use the functional API with an existing model","description":"An example of using the functional API to extend an existing model.","displayName":"Use the functional API with an existing model"}],"documentationUrl":"https://docs.unity3d.com/Packages/com.unity.ai.inference@2.2/manual/index.html","name":"com.unity.ai.inference","version":"2.2.1","displayName":"Inference Engine","description":"Inference Engine is a neural network inference library. It enables you to import trained neural network models, connect the network inputs and outputs to your game code, and then run them locally in your end-user app. Use cases include capabilities like natural language processing, object recognition, automated game opponents, sensor data classification, and many more.\n\nInference Engine automatically optimizes your network for real-time use to speed up inference. It also allows you to tune your implementation further with tools like frame slicing, quantization, and custom backend (i.e. 
compute type) dispatching.\n\nVisit https://unity.com/ai for more resources.","dependencies":{"com.unity.burst":"1.8.17","com.unity.collections":"2.4.3","com.unity.modules.imageconversion":"1.0.0"},"dist":{"shasum":"803814f8170853f83d849f5ae5cb46cf1b7b6f83","tarball":"https://download.packages.unity.com/com.unity.ai.inference/-/com.unity.ai.inference-2.2.1.tgz"}},"2.2.0":{"_upm":{"changelog":"### Added\n- First version of Inference Engine"},"unity":"6000.0","samples":[{"path":"Samples~/Convert tensors to textures","description":"Examples of converting tensors to textures.","displayName":"Convert tensors to textures"},{"path":"Samples~/Convert textures to tensors","description":"Examples of converting textures to tensors.","displayName":"Convert textures to tensors"},{"path":"Samples~/Copy a texture tensor to the screen","description":"An example of using TextureConverter.RenderToScreen to copy a texture tensor to the screen.","displayName":"Copy a texture tensor to the screen"},{"path":"Samples~/Encrypt a model","description":"Example of serializing an encrypted model to disk using a custom editor window and loading that encrypted model at runtime.","displayName":"Encrypt a model"},{"path":"Samples~/Quantize a model","description":"Example of serializing a quantized model to disk using a custom editor window and loading that quantized model at runtime.","displayName":"Quantize a model"},{"path":"Samples~/Read output asynchronously","description":"Examples of reading the output from a model asynchronously, using compute shaders or Burst.","displayName":"Read output asynchronously"},{"path":"Samples~/Run a model","description":"Examples of running models with different numbers of inputs and outputs.","displayName":"Run a model"},{"path":"Samples~/Run a model a layer at a time","description":"An example of using ScheduleIterable to run a model a layer at a time.","displayName":"Run a model a layer at a time"},{"path":"Samples~/Use a compute buffer","description":"An 
example of using a compute shader to write data to a tensor on the GPU.","displayName":"Use a compute buffer"},{"path":"Samples~/Use a job to write data","description":"An example of using Burst to write data to a tensor in the Job system.","displayName":"Use Burst to write data"},{"path":"Samples~/Use tensor indexing methods","description":"Examples of using tensor indexing methods to get and set tensor values.","displayName":"Use tensor indexing methods"},{"path":"Samples~/Use the functional API with an existing model","description":"An example of using the functional API to extend an existing model.","displayName":"Use the functional API with an existing model"}],"documentationUrl":"https://docs.unity3d.com/Packages/com.unity.ai.inference@2.2/manual/index.html","name":"com.unity.ai.inference","version":"2.2.0","displayName":"Inference Engine","description":"Inference Engine is a neural network inference library. It enables you to import trained neural network models, connect the network inputs and outputs to your game code, and then run them locally in your end-user app. Use cases include capabilities like natural language processing, object recognition, automated game opponents, sensor data classification, and many more.\n\nInference Engine automatically optimizes your network for real-time use to speed up inference. It also allows you to tune your implementation further with tools like frame slicing, quantization, and custom backend (i.e. 
compute type) dispatching.\n\nVisit https://unity.com/ai for more resources.","dependencies":{"com.unity.burst":"1.8.17","com.unity.collections":"2.4.3","com.unity.modules.imageconversion":"1.0.0"},"dist":{"shasum":"89644c5bfb09a0e9d1728b0d21304169c7505eae","tarball":"https://download.packages.unity.com/com.unity.ai.inference/-/com.unity.ai.inference-2.2.0.tgz"}}},"time":{"2.6.1":"2026-04-02T20:36:45.716Z","2.6.0":"2026-03-30T14:13:55.414Z","2.5.0":"2026-01-29T17:38:50.034Z","2.4.1":"2025-11-03T21:12:02.967Z","2.4.0":"2025-10-30T18:30:07.386Z","2.2.2":"2025-10-28T21:13:17.057Z","2.3.0":"2025-07-30T10:10:19.479Z","2.2.1":"2025-05-28T13:30:23.110Z","2.2.0":"2025-05-15T14:10:02.121Z"},"dist-tags":{"latest":"2.6.1"},"etag":"\"9ea9-lVr17WNsHns6PCTBIF/0QgNAiZ4\""}