Domain > akashtiwari.github.io
DNS Resolutions

Date         IP Address
2018-04-22   151.101.129.147 (ClassC)
2018-04-22   151.101.193.147 (ClassC)
2018-04-22   151.101.65.147  (ClassC)
2024-01-14   185.199.111.153 (ClassC)
2024-07-06   185.199.110.153 (ClassC)
2024-08-24   185.199.108.153 (ClassC)
Port 80
HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: 8783
Server: GitHub.com
Content-Type: text/html; charset=utf-8
permissions-policy: interest-cohort=()
Last-Modified: Sun, 29 Jan 2017 08:54:36 GMT
Access-

<!DOCTYPE html>
<html lang="en-us">
<head>
<meta charset="UTF-8">
<title>Deduplication Algorithm</title>
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta name="theme-color" content="#157878">
<link href="https://fonts.googleapis.com/css?family=Open+Sans:400,700" rel="stylesheet" type="text/css">
<link rel="stylesheet" href="/assets/css/style.css?v=02e41d6b5ccb732a262aef02e40d12afb1ab1811">
</head>
<body>
<section class="page-header">
<h1 class="project-name">Deduplication Algorithm</h1>
<h2 class="project-tagline">Everything can be improved. ~Clarence W. Barron</h2>
<a href="https://github.com/AkashTiwari/AkashTiwari.github.io/zipball/master" class="btn">Download .zip</a>
<a href="https://github.com/AkashTiwari/AkashTiwari.github.io/tarball/master" class="btn">Download .tar.gz</a>
</section>
<section class="main-content">
<p>### What Deduplication Is</p>
<p>According to Wikipedia, “Data deduplication is a specific form of compression where redundant data is eliminated, typically to improve storage utilization. In the deduplication process, duplicate data is deleted, leaving only one copy of the data to be stored. However, indexing of all data is still retained should that data ever be required. Deduplication is able to reduce the required storage capacity since only the unique data is stored.”</p>
<h3 id="methods-for-deduplication-algorithm">Methods for Deduplication Algorithm</h3>
<ol>
<li>File-level Deduplication</li>
<li>Block-level Deduplication</li>
</ol>
<p><strong>File-level deduplication</strong> watches for multiple copies of the same file, stores the first copy, and then just links the other references to the first file. Only one copy gets stored on the disk/tape archive. Ultimately, the space you save on disk relates to how many copies of the file there were in the file system.</p>
<p>Let's assume a company with 1,000 employees shares a common file, say “data.txt”, which is 10 MB in size. Each employee makes the same changes and saves 1,000 identical copies of the file on the server, so the estimated storage required to sav
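The file-level deduplication the banner's page content describes (store the first copy, turn every later copy into a reference to it) can be sketched in a few lines. This is a minimal illustration, not the site author's implementation: it assumes content hashing with SHA-256 to decide file identity, and it records the links in a dictionary rather than creating on-disk hard links.

```python
import hashlib


def file_digest(path: str) -> str:
    """Hash the file's full contents; identical files yield the same digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def deduplicate(paths):
    """File-level dedup: keep the first copy of each unique file;
    map every duplicate path to the single stored copy."""
    store = {}  # digest -> path of the one stored copy
    links = {}  # duplicate path -> stored path (a hard link on a real system)
    for p in paths:
        d = file_digest(p)
        if d in store:
            links[p] = store[d]
        else:
            store[d] = p
    return store, links
```

In the 1,000-employee scenario above, all 1,000 copies of the 10 MB file hash to the same digest, so `store` holds one entry (10 MB on disk) and `links` holds the other 999 references.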
Port 443
(Identical response to port 80.)
Data with thanks to AlienVault OTX, VirusTotal, Malwr and others.