# @sourceloop/audit-service

## Overview
The `@sourceloop/audit-service` is a microservice designed for managing audit logs. It offers extensive functionality to track and record user actions such as inserts, updates, and deletes. Built on top of `@sourceloop/audit-log`, this service provides a repository mixin for easy integration.

While the repository mixin logs all actions by default, the audit-service lets you selectively audit specific scenarios or cases, giving you complete control over the auditing process. With the service's exposed APIs, you can insert and retrieve audited data, tailoring the auditing approach to your needs.

Additionally, the audit-service offers an archiving feature that lets you archive logs to an AWS S3 bucket based on specific filters. You can even retrieve logs from both the S3 bucket and the audit database simultaneously, providing a comprehensive view of your audit history.
## Installation
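Install the package from npm:

```sh
npm i @sourceloop/audit-service
```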
## Getting Started
You can start using `@sourceloop/audit-service` in just 4 steps:
### Bind Component

Bind the `AuditServiceComponent` to your application constructor as shown below. This will load all controllers, repositories, and other artifacts provided by this service into your application.
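A minimal sketch (the application class name is illustrative):

```typescript
import {BootMixin} from '@loopback/boot';
import {ApplicationConfig} from '@loopback/core';
import {RepositoryMixin} from '@loopback/repository';
import {RestApplication} from '@loopback/rest';
import {ServiceMixin} from '@loopback/service-proxy';
import {AuditServiceComponent} from '@sourceloop/audit-service';

export class MyAuditApplication extends BootMixin(
  ServiceMixin(RepositoryMixin(RestApplication)),
) {
  constructor(options: ApplicationConfig = {}) {
    super(options);
    // Loads the audit service's controllers, repositories and other artifacts
    this.component(AuditServiceComponent);
    // ...rest of your application setup
  }
}
```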
### Set the environment variables

The example below shows a common configuration for a PostgreSQL database running locally.
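For instance, a local setup might use a `.env` along these lines (all values are placeholders):

```
NODE_ENV=dev
LOG_LEVEL=DEBUG
DB_HOST=localhost
DB_PORT=5432
DB_USER=pg_service_user
DB_PASSWORD=pg_service_user_password
DB_DATABASE=audit_db
DB_SCHEMA=public
JWT_SECRET=super_secret_string
JWT_ISSUER=https://authentication.service
```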
### Configure DataSource

Set up a LoopBack4 DataSource with the `dataSourceName` property set to `AuditDbSourceName`.
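A sketch of such a datasource, following the standard LoopBack 4 pattern (connection values come from the environment variables described below):

```typescript
import {inject, lifeCycleObserver, LifeCycleObserver} from '@loopback/core';
import {juggler} from '@loopback/repository';
import {AuditDbSourceName} from '@sourceloop/audit-log';

const config = {
  name: AuditDbSourceName,
  connector: 'postgresql',
  host: process.env.DB_HOST,
  port: process.env.DB_PORT,
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  database: process.env.DB_DATABASE,
  schema: process.env.DB_SCHEMA,
};

@lifeCycleObserver('datasource')
export class AuditDataSource
  extends juggler.DataSource
  implements LifeCycleObserver
{
  static readonly dataSourceName = AuditDbSourceName;
  static readonly defaultConfig = config;

  constructor(
    // Allows the config to be overridden at runtime via the context
    @inject(`datasources.config.${AuditDbSourceName}`, {optional: true})
    dsConfig: object = config,
  ) {
    super(dsConfig);
  }
}
```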
### Migrations

The migrations required for this service run automatically during installation if you set the `AUDIT_MIGRATION` or `SOURCELOOP_MIGRATION` environment variable. The migrations use `db-migrate` with the `db-migrate-pg` driver, so you will have to install these packages to use auto-migration. Note that any pre-existing migrations or databases may be affected; in such a scenario, it is advised that you copy the migration files into your project root using the `AUDIT_MIGRATION_COPY` or `SOURCELOOP_MIGRATION_COPY` environment variable. You can then customize or cherry-pick the migrations in the copied files according to your specific requirements and apply them to the DB.
Additionally, there is now an option to choose between SQL migration and PostgreSQL migration.

NOTE: For `@sourceloop/cli` users, this choice can be specified during the scaffolding process by selecting the "type of datasource" option.
## Usage

### Creating Logs

The logs in this service can be created either through the REST endpoint or through a repository mixin provided by the `@sourceloop/audit-log` npm module. By default, this mixin creates logs for all the built-in actions performed through the extended repository. You can read more about how to use this package here.

All the different types of action that are logged are:
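- `INSERT_ONE`
- `INSERT_MANY`
- `UPDATE_ONE`
- `UPDATE_MANY`
- `DELETE_ONE`
- `DELETE_MANY`

These correspond to the `Action` enum values of `@sourceloop/audit-log`; confirm the exact set against the version you have installed.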
### Archive Logs

The audit logs can be archived via the REST endpoint `/audit-logs/archive`. A custom filter is provided, based on which logs can be archived. Currently, it supports uploading the archived logs to AWS S3, for which you'll need to set the AWS credentials.
#### Archival Filter
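The exact filter shape depends on the service version; purely as an illustration, a LoopBack-style filter constraining fields of the audit log model (the payload shape here is an assumption) might look like:

```json
{
  "where": {
    "actedOn": "Product",
    "actedAt": {
      "lte": "2023-01-01T00:00:00.000Z"
    }
  }
}
```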
#### Archival Response

You'll get a response similar to the following after requesting the archival:
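An illustrative shape (the `message` text and `key` value are placeholders):

```json
{
  "message": "Your archival request is being processed",
  "key": "audit_logs_1695798000000.csv"
}
```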
Here, `key` represents the AWS S3 object key of the file that contains the archived logs.
### Archive audit logs

This provides a function that converts the selected data to CSV format and exports it to the AWS S3 bucket specified via the `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_REGION`, and `AWS_S3_BUCKET_NAME` variables of the `env` file. By default, the file is exported in CSV format to an AWS S3 bucket; this behavior can be overridden using this provider. These values can be provided in the config file as well. It is also necessary to provide the value of the `PATH_TO_UPLOAD_FILES` variable in the `env` file. The config file can be bound in the application as shown below.
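A sketch of such a binding (the binding key and property names here are hypothetical; check the bindings exported by `@sourceloop/audit-service` for the actual ones):

```typescript
// In your application constructor. Both the binding key and the shape of the
// config object are hypothetical; use the key exported by the service.
this.bind('sf.audit.archive.config').to({
  awsAccessKeyId: process.env.AWS_ACCESS_KEY_ID,
  awsSecretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
  awsRegion: process.env.AWS_REGION,
  awsS3BucketName: process.env.AWS_S3_BUCKET_NAME,
  pathToUploadFiles: process.env.PATH_TO_UPLOAD_FILES,
});
```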
Implementation for this can be seen here
### Get Logs

This feature is used to query the logs present in the Audit Database or the archive storage (e.g. AWS S3). A default LoopBack filter is accepted, based on which logs are fetched. Along with this, a boolean flag called `includeArchivedLogs` is also provided, which accepts `true` or `false` and determines whether archived logs are included in the response.
If `includeArchivedLogs` is set to `true`, the data will be fetched from both the Audit Database and the archive storage based on the filter provided as input, but it is not returned immediately. Instead, a `jobId` is returned, representing the background operation that fetches and parses logs from the archive storage. This `jobId` can be used to check the status of the process and to get the result when it is done.
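Illustratively, the immediate response in that case carries just the job identifier (the `jobId` format is an assumption):

```json
{
  "jobId": "f6d4b0f2-6f63-4f4e-9d7e-0f6b8a4f2c11"
}
```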
If the `includeArchivedLogs` option is set to `false` (the default if not provided), the data is fetched only from the Audit Database and not from the archive storage, and in this case the response contains the requested data directly.
### Export Logs

This feature is used to export the logs present in the Audit Database or the archive storage (e.g. AWS S3). A default LoopBack filter is accepted, based on which logs are exported to the desired location, as an Excel file by default. Along with this, a boolean flag called `includeArchivedLogs` is also provided, which accepts `true` or `false` and determines whether archived logs are included in the response.
If `includeArchivedLogs` is set to `true`, the data will be fetched from both the Audit Database and the archive storage based on the filter provided as input, but it is not returned immediately. Instead, a `jobId` is returned, representing the background operation that fetches and parses logs from the archive storage. This `jobId` can be used to check the status of the process and to get the result when it is done.
If the `includeArchivedLogs` option is set to `false` (the default if not provided), the data is fetched only from the Audit Database and not from the archive storage, and in this case the response contains the requested data directly.

This feature also allows custom column names to be given to the data, or specific columns to be clubbed together. This can be done with the help of providers, which can be overridden as described below.
### Create custom columns

This provides a function used to customize the columns present in the original data; the names of the custom columns can also be specified in the function. By default, no change is made to the original data. To make the desired changes, the provider can be overridden, as in the sample implementation below, which creates custom columns and custom column names.
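A sketch of what such an override could look like (the function type, class name, and column choices here are illustrative assumptions):

```typescript
import {Provider} from '@loopback/core';
import {AnyObject} from '@loopback/repository';

// Hypothetical function type: takes the rows selected for export and returns
// rows with customized or merged columns.
type ColumnBuilderFn = (data: AnyObject[]) => AnyObject[];

export class CustomColumnBuilderProvider implements Provider<ColumnBuilderFn> {
  value(): ColumnBuilderFn {
    return data =>
      data.map(row => ({
        // Custom column names for the exported file
        Entity: row.actedOn,
        Action: row.action,
        // Clubbing two columns together into one
        'Actor @ Time': `${row.actor} @ ${row.actedAt}`,
      }));
  }
}
```

Bind this provider in your application in place of the service's default column-builder provider.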
### Process audit logs

This provides a function that takes an Excel file buffer as input, so that any desired operation can be performed on the file buffer.
Implementation for this can be seen here
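A minimal sketch of overriding such a provider (the function type and class names are illustrative assumptions):

```typescript
import {Provider} from '@loopback/core';

// Hypothetical function type: receives the generated Excel file buffer and
// performs any desired post-processing on it.
type ProcessAuditLogsFn = (fileBuffer: Buffer) => Promise<void>;

export class ProcessAuditLogsProvider implements Provider<ProcessAuditLogsFn> {
  value(): ProcessAuditLogsFn {
    return async fileBuffer => {
      // e.g. write the buffer to disk, push it to another storage service,
      // or post-process the workbook before it is delivered.
      console.log(`Received exported file of ${fileBuffer.length} bytes`);
    };
  }
}
```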
## Environment Variables

| Name                          | Required | Default Value | Description                                                                                                                         |
| ----------------------------- | -------- | ------------- | ----------------------------------------------------------------------------------------------------------------------------------- |
| `NODE_ENV`                    | Y        |               | Node environment value, i.e. `dev`, `test`, `prod`.                                                                                  |
| `LOG_LEVEL`                   | Y        |               | Log level value, i.e. `error`, `warn`, `info`, `verbose`, `debug`.                                                                   |
| `DB_HOST`                     | Y        |               | Hostname for the database server.                                                                                                    |
| `DB_PORT`                     | Y        |               | Port for the database server.                                                                                                        |
| `DB_USER`                     | Y        |               | User for the database.                                                                                                               |
| `DB_PASSWORD`                 | Y        |               | Password for the database user.                                                                                                      |
| `DB_DATABASE`                 | Y        |               | Database to connect to on the database server.                                                                                       |
| `DB_SCHEMA`                   | Y        | `public`      | Database schema used for the data source. In PostgreSQL, this will be `public` unless a schema is made explicitly for the service.   |
| `JWT_SECRET`                  | Y        |               | Symmetric signing key of the JWT token.                                                                                              |
| `JWT_ISSUER`                  | Y        |               | Issuer of the JWT token.                                                                                                             |
| `AWS_ACCESS_KEY_ID`           | N        |               | Access key ID associated with your AWS account.                                                                                      |
| `AWS_SECRET_ACCESS_KEY`       | N        |               | Secret access key associated with your AWS account.                                                                                  |
| `AWS_REGION`                  | N        |               | Specifies the AWS region where your AWS S3 bucket is located.                                                                        |
| `AWS_S3_BUCKET_NAME`          | N        |               | Name of the AWS S3 bucket you want to save the archived audit logs in.                                                               |
| `PATH_TO_EXPORT_FILES_FOLDER` | N        |               | Specifies the path to store the exported files.                                                                                      |
| `PATH_TO_UPLOAD_FILES`        | N        |               | Specifies the path to store the archived files on the S3 bucket.                                                                     |
## Using with Sequelize

This service supports Sequelize as the underlying ORM, using the `@loopback/sequelize` extension. In order to use it, you'll need to make the following changes:

1. To use Sequelize in your application, add the binding shown in the first sketch below to your `application.ts`.
2. Use the `SequelizeDataSource` as the parent class in your audit datasource (second sketch below). Refer to this for more.
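A sketch of the first change (the `useCustomSequelize` flag follows the configuration pattern used across sourceloop services; confirm the exact binding key exported by `@sourceloop/audit-service`):

```typescript
import {AuditServiceBindings} from '@sourceloop/audit-service';

// In the application constructor, before binding AuditServiceComponent.
// The binding key is assumed here; verify it against the service's exports.
this.bind(AuditServiceBindings.Config).to({
  useCustomSequelize: true,
});
```

And a sketch of the datasource change:

```typescript
import {SequelizeDataSource} from '@loopback/sequelize';
import {AuditDbSourceName} from '@sourceloop/audit-log';

export class AuditDataSource extends SequelizeDataSource {
  static readonly dataSourceName = AuditDbSourceName;
  // ...same config as in the juggler-based datasource above
}
```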
## API Documentation

### Common Headers

- `Authorization: Bearer <token>`, where `<token>` is a JWT token signed using the JWT issuer and secret.
- `Content-Type: application/json` in the response, and in the request if the API method is NOT GET.
### API Details

Visit the OpenAPI spec docs for more details on the APIs provided in this service.
## License

Sourceloop is MIT licensed.