Posted: December 8th, 2022

In this homework assignment, you will deploy a basic data collection and data structuring pipeline for the NSL-KDD Dataset.
1. Open a new Elastic Cloud Account.
https://cloud.elastic.co/registration?settings=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJsZW5ndGgiOjE1MCwic2l6ZSI6NDA5NiwiZGVmYXVsdF9zaXplIjoxMDI0fQ.dS6xqdrcNBVkANlcS19AnsZmHVSqoPROLHprdeN-Qbc&source=education
2. Download the NSL-KDD dataset:
https://www.unb.ca/cic/datasets/nsl.html
3. Pipeline Demo
Download the provided nfstream.conf configuration file.
Detailed instructions and guidelines will be provided in class.
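For orientation, a minimal Logstash pipeline sketch of the kind expected in main.txt is shown below. This is an illustration, not the provided nfstream.conf: it assumes Filebeat ships the NSL-KDD CSV records to a Beats input on port 5044, the column list is truncated (the dataset has 41 features plus the attack label and difficulty level), and the output host and credentials are placeholders you must replace with your own values.

input {
  beats {
    port => 5044                             # assumed port; Filebeat forwards the NSL-KDD lines here
  }
}

filter {
  csv {
    separator => ","
    # Only a few NSL-KDD columns are named here as an illustration;
    # list every field you intend to index.
    columns => ["duration", "protocol_type", "service", "flag", "src_bytes", "dst_bytes"]
  }
  mutate {
    # Cast numeric fields so they are not indexed as plain strings.
    convert => {
      "duration"  => "integer"
      "src_bytes" => "integer"
      "dst_bytes" => "integer"
    }
  }
}

output {
  elasticsearch {
    hosts    => ["https://localhost:9200"]   # or your Elastic Cloud endpoint
    index    => "logstash-nsl-kdd"
    user     => "elastic"                    # replace with your own credentials
    password => "changeme"
  }
}

Your own pipeline may use a different input (for example the file input plugin) or additional filters; questions 3a-3c below ask you to justify those choices.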
Deliverables:
1. A “main.txt” file containing your generated pipeline.
2. A small report presenting the following screenshots taken from Kibana:
a) A screenshot of the Mappings on the Index Template you created
b) A screenshot showing the “logstash-nsl-kdd” index created by your pipeline to store the data. Note that if you previously stored the logs from your my_hello_world.log in this same index, you will need to first delete the index containing that data and then collect your KDD dataset again with Filebeat (process: stop Filebeat, delete the Filebeat registry, then restart Filebeat; there is no need to restart Logstash once it is already running successfully).
c) Create a Kibana Data View that reads the data from the logstash-nsl-kdd index, and include a screenshot of the data presented in Discover when that Data View is selected.
Use the “Final project report template” for simple formatting, but use only the cover page and the section pages you need.
3. Answer the following questions:
a) What input and filter plugins did you use to process the dataset?
b) Why did you select these plugins?
c) What alternatives did you have to process this dataset and why did you opt to use the current one?
d) After structuring your data, how did you indicate to Elasticsearch which data types your dataset contained? (An index template sketch is provided after this list for reference.)
e) Where was your data stored in Elasticsearch?
f) How many data points did you collect? (A count query sketch is provided after this list.)
g) If you didn’t have this dataset in a file, what agents would you use to collect the same or similar data?
4. Submit your report in PDF format.
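For deliverable 2a and question 3d, the Kibana Dev Tools request below sketches one way to declare your data types through an index template. The template name, index pattern, and field subset are assumptions for illustration; define mappings for all of the NSL-KDD fields you actually ingest.

PUT _index_template/nsl-kdd-template
{
  "index_patterns": ["logstash-nsl-kdd*"],
  "template": {
    "mappings": {
      "properties": {
        "duration":      { "type": "integer" },
        "protocol_type": { "type": "keyword" },
        "service":       { "type": "keyword" },
        "flag":          { "type": "keyword" },
        "src_bytes":     { "type": "long" },
        "dst_bytes":     { "type": "long" },
        "serror_rate":   { "type": "float" }
      }
    }
  }
}

Because the template matches the logstash-nsl-kdd* pattern, its mappings are applied automatically when the pipeline creates the index.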
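For question 3f, one simple way to check how many documents were indexed is the _count API in Kibana Dev Tools (the index name below matches the one created by the pipeline):

GET logstash-nsl-kdd/_count

The response includes a count field that you can compare with the number of records in the NSL-KDD file you ingested.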
