Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Topics - Nemo

Pages: [1] 2 3 ... 8
1
Unsorted (deutsch) / IPv6-capable P2P software?
« on: February 02, 2011, 09:50:23 PM »
Hello everyone

I just attended an IPv6 talk where the speaker afterwards shared his IPv6 tunnel over the WLAN. He even managed to get everyone present a full score on the test at http://test-ipv6.com/ (so the machines were directly reachable from the Internet).  ;D Somehow, though, the IPv6 Internet doesn't have that much to offer yet, so it's nice to have IPv4 still running in parallel.

Unfortunately, every piece of software has to be adapted to become IPv6-capable...  :P That doesn't look good for older or dormant P2P projects... Luckily IPv4 will keep running alongside it for quite a while, so that millions of hardware components don't instantly become e-waste and the Internet doesn't fall apart.  ;D

Do you know which P2P software is already IPv6-capable? On the one hand it's frightening, on the other hand fascinating, that these tunnels bring globally reachable IPv6 addresses onto machines sitting in arbitrary IPv4 networks. A huge network without NAT, the way the Internet used to be! That practically cries out for P2P networks...  :D
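
To make the "adapting" part concrete, here is a minimal sketch in C# of what the change usually boils down to: listening on a single dual-stack socket so that both IPv4 and IPv6 peers can connect. The port number is just an example, and this needs an OS and runtime that support dual-stack sockets (e.g. Windows Vista or later, or Linux):

Code: [Select]
using System;
using System.Net;
using System.Net.Sockets;

class DualStackListenerSketch
{
    static void Main()
    {
        // Bind to the IPv6 wildcard address on an example port.
        TcpListener listener = new TcpListener(IPAddress.IPv6Any, 6346);

        // Clear IPV6_V6ONLY so the same socket also accepts IPv4 peers;
        // they show up as mapped addresses like ::ffff:192.0.2.1.
        listener.Server.SetSocketOption(SocketOptionLevel.IPv6,
                                        SocketOptionName.IPv6Only, false);
        listener.Start();
        Console.WriteLine("Listening on [::]:6346 for IPv4 and IPv6 peers");

        TcpClient peer = listener.AcceptTcpClient();
        Console.WriteLine("Incoming connection from " + peer.Client.RemoteEndPoint);
        peer.Close();
        listener.Stop();
    }
}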


Regards,
Nemo.

2
A small closed-source software project set out to protect Iranian bloggers from their own state; now serious security flaws are suspected and using the distributed proxy network is discouraged:

Article on the Heise newsticker:
Anti-Zensur-Projekt Haystack wegen kritischer Fehler gestoppt (anti-censorship project Haystack halted over critical flaws)

Description on Wikipedia:
Haystack (software)


Well, some people never learn: when it comes to privacy, data security, network security and the like, it pays to rely on open source...  ;D

Regards,
Nemo.

3
Freenet 0.7 (Opennet/Darknet) / February 4, 2010: Freenet Status Update
« on: February 05, 2010, 09:44:05 PM »
Freenet keeps growing! It's impressive how many Freenet nodes exist on the network...  :o

Quote
From:    Matthew Toseland <toad@amphibian.dyndns.org>
To:    Discussion of development issues <devl@freenetproject.org>, support@freenetproject.org
Subject:    [freenet-dev] Freenet Status Update
Date:    04.02.2010 23:35:30 (Thu, 4 Feb 2010 22:35:30 +0000)


BUILD 1240

Our last stable build, 1239, was in November. We have just released a new one, 1240. This has many changes (opennet stuff, optimisations, all sorts of stuff), which I list in the mail about it. One of the most important is that there are several new seednodes, and many dead ones have been removed. I have tested it 3 times today and it's bootstrapped fast each time, although yesterday it bootstrapped very slowly one time.

NETWORK STATUS AND NETWORK STATISTICS

Evan Daniel has been doing some useful work analysing the network. Amongst other things, he has discovered that:
- The Guardian article, in December, which was reprinted around the world, has more than doubled the size of our network, although there is a slight downward trend now. This may be due to seednodes issues and not having had a build since November.
- We have around 4500-7000 nodes online at any given time.
- Over 5 days, we have around 14000 non-transient nodes.
- For nodes online at any one time, roughly 37% are 24x7 nodes (96% uptime average), 33% are regular users (56% average uptime), and 30% are occasional or newbie nodes (16% average uptime).


EMU IS DEAD, LONG LIVE OSPREY

We have finally gotten rid of emu! Our faithful and powerful dedicated server supplied at a discount by Bytemark is no more. We now have a virtual machine called Osprey, which does most of the same job, for a much lower cost, and has a much simplified setup so should be easier to maintain. We have tried to outsource services, for example we use Google Code for our downloads, but some things will have to stay under our direct control for some time to come e.g. mailing lists and the bug tracker.

You may have some difficulty with the update scripts, if you use update.sh / update.cmd. If it doesn't work, try updating the script manually from https://checksums.freenetproject.org/latest/update.cmd (or update.sh)

WOT, FREETALK, RELATED THINGS AND OTHER PLUGINS

Xor (also known as p0s) continues to work on the Web of Trust and Freetalk plugins. These are approaching the point where we can make them loadable from the plugins page, and then bundle them, enabled by default.

WoT is the backend system which implements a pseudonymous web of trust, which functions in a similar way to that in FMS. You can create identities, assign trust to other identities, announce your identity via CAPTCHAs and so on. This is the Community menu, from which you can see your identities and other people's, and the trust relationships between them. WoT is used by Freetalk, FlogHelper, and probably soon by distributed searching, real time chat and other things.

Freetalk is a spam-resistant chat system based on WoT. This is similar to FMS, but it will eventually be bundled with Freenet, and will be a part of it by default. You will be able to embed a Freetalk board on your freesite. FlogHelper is a WoT-based plugin for writing a flog (freenet blog), which is very easy to use, but uses WoT to manage identities. I would have bundled FlogHelper months ago, but WoT isn't ready yet and FlogHelper needs it.

WoT should be ready soon. Recently a major issue was discovered with the trust calculation algorithm; once that and some minor issues are fixed, WoT will become a semi-official plugin. This will sadly require flushing the existing "testing" web of trust, so all old messages and identities will go away. Freetalk needs more work; about 50% of the bugs marked for 0.1 on the roadmap are fixed at the moment.

In build 1240, we pull in a new version of Library. This is a great improvement over the old version, it is faster, it supports embedding a search on a freesite, and has many bugs fixed. However searching for common terms can still cause out of memory crashes.

There is another issue with Library: infinity0 spent last summer creating a scalable index format for Library, which should make it a lot easier to insert and maintain big indexes. We will soon change the spider to use this new format, and in the process we expect to greatly improve performance for writing indexes, so it doesn't take a week any more and is done incrementally. I realise this has been promised before, but it is important, so it will happen sooner or later, hopefully sooner.

Full Web of Trust-based distributed searching, with a focus on filesharing, is on the distant horizon at the moment. infinity0 might be able to do some work on it as part of his studies, we'll see. It won't be in 0.8.0.

PRIORITIES AND RELEASES

We would like to get 0.8 out soon, or at least a beta of 0.8. Several major issues:
- The windows installer needs to be fixed on 64-bit. This is being worked on.
- Freetalk must be ready.
- Auto-configuration of memory limits in the installers, and asking the user about memory usage (at least in some cases) is relatively easy and important, but not vital.
- Substantial improvements to opennet, particularly making nodes announce onto the network and get where they should be as quickly as possible.
- Substantial improvements to data persistence. We have done much here already but there is more to do.
- Library must work well and fast out of the box. This means amongst other things the new spider mentioned above.
- MANY BUG FIXES! The first beta does not need to be perfect, but there are some critical issues that need dealing with, such as the fact that nodes often don't resume properly after being suspended for a while.

Please test Freenet, and report any bugs and usability issues you find on the bug tracker ( https://bugs.freenetproject.org/ ) or via Freetalk board en.freenet (note that this will be wiped soon so if after a new Freetalk release it is wiped you may need to resend).

OPENNET IMPROVEMENTS

We have many ideas on how to improve opennet bootstrapping (make nodes assimilate into the network more quickly), and to improve opennet generally. Some of these are implemented in 1240, including many bugfixes. More will be put out over time so we can see their impact. Improving opennet should improve performance for the majority of users who don't run 24x7 and it should improve performance for everyone else too, as those nodes will get connected and start doing useful work more quickly.

DATA PERSISTENCE

We have many ideas on how to improve data persistence. There is a lot of capacity on the network, yet data seems to become inaccessible quite quickly (stats below). I am convinced that improving data persistence will improve Freenet's usability and perceived performance immensely. The continued popularity of insert on demand on uservoice demonstrates this as much as anything: People want a system that works! IMHO we can greatly improve things without resorting to insert on demand, although filesharing clients based on distributed searching may eventually offer it (but there are serious security issues with insert on demand).

Evan is convinced that mostly poor data persistence is not due to data falling out of stores, but due to the small number of nodes that stored the data (as opposed to caching it) going offline or becoming unreachable. We have increased the number of nodes that store data, we have made the node use the store for caching if there is free space, we have done various things aimed at improving data persistence, and there is much more we can do. An immediate question is whether the security improvements gained last year by not caching at high HTL have broken many inserts by making them not get cached on the right nodes; we will test this in 1241. A related question is why inserting the same key 3 times gives such a huge performance gain relative to inserting it once; we will investigate this soon after. We will probably triple-insert the top blocks of splitfiles soonish, but the bigger prize is to achieve the 90%+ success after a week that we see with triple-insertion of a single block, and this may well be possible with some changes to how inserts work...

Finally, the redundancy in the client layer could be a lot smarter: We divide files up into groups of 128 blocks, called segments, and then add another 128 "check blocks" for redundancy. Unfortunately this means that sometimes the last segment only has 1 block and 1 check block, and so is much less reliable than the rest of the splitfile. We will fix this.

We have been collecting statistics on data retrievability over time. The below are "worst case" in that they relate to single CHK blocks, with no retries. Real life, with many retries (at least 2 for a direct fetch and more if the file is queued), and with large, redundant splitfiles, should be substantially better than these numbers. Every day we insert 32 blocks and fetch a bunch of 32 blocks from 1 day ago, 3 days ago, 7 days ago, etc. There are two of these running to get more data, so I am just showing both results here. The percentages are the proportion of the original insert that is still retrievable:
1 day   76% / 77%
3 days  66% / 70%
7 days  60% / 61%
15 days 48% / 48%
31 days 36% / 33%
63 days 21% / 19%

Now, here's an interesting one. In each case we insert a 64KB CHK splitfile - that is, one block at the top and four underneath it. We insert one three times, and we insert three different ones once each. We then pull them after a week. We can therefore compare success rates for a single block inserted once, a single block inserted 3 times, and a simulated MHK, that is, a block which has been re-encoded into 3 blocks so that we fetch all of them and if any of them succeeds we can regenerate the others.

Total attempts where insert succeeded and fetch executed: 63
Single keys succeeded: 61
MHKs succeeded: 58
Single key individual fetches: 189
Single key individual fetches succeeded: 141
Success rate for individual keys (from MHK inserts): 0.746031746031746
Success rate for the single key triple inserted: 0.9682539682539683
Success rate for the MHK (success = any of the 3 different keys worked): 0.9206349206349206
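
In other words: 141 of 189 individual fetches succeeded (about 74.6%), the triple-inserted single key was retrievable in 61 of 63 attempts (about 96.8%), and at least one of the three MHK variants was retrievable in 58 of 63 attempts (about 92.1%).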

USER INTERFACE AND USABILITY

Ian's friend pupok is working on a new AJAXy user interface mockup for Freenet. sashee's web-pushing branch, which makes the user interface a lot more dynamic without making it look much different, should be merged soon, but turned off by default, since it has some nasty bugs. When it is turned on, it solves the age-old parallel connections bug, showing individual progress for each image without hogging your browser's limited number of connections (6 or 8 on modern browsers). Both of these may miss 0.8.

More broadly on usability, usability testing is always welcome: Persuade a friend to install Freenet, watch them do it, don't help them unless they get really stuck, report any problems they have or any comments they make about how it could be better.


Greetings,
Nemo.

4
Freenet 0.7 (Opennet/Darknet) / Freenet 0.7.5 stable build 1240 released
« on: February 05, 2010, 09:40:32 PM »
Freenet is not dead! An e-mail from the Freenet developer and support mailing lists:

Quote
From: Matthew Toseland <toad@amphibian.dyndns.org>
To: Discussion of development issues <devl@freenetproject.org>, support@freenetproject.org
Date: 04.02.2010 20:18:57 (Thu, 4 Feb 2010 19:18:57 +0000)
Subject: [freenet-dev] Freenet 0.7.5 build 1240

Freenet 0.7.5 build 1240 is now available. It will be mandatory on Wednesday and contains many important fixes, 3 months work in fact. Many people have contributed to this and my contribution has been less than it usually would be for various reasons. The auto-update system should fetch the new build shortly, let us know if it does not work. If you have to use the update scripts (update.sh or update.cmd), please note that they may not work perfectly on old installs where sha1test.jar doesn't exist or is out of date; you may need to update the script manually from https://checksums.freenetproject.org/latest/update.cmd (or .sh). This is related to us getting rid of emu and getting a new server, currently https://downloads.freenetproject.org/ and https://emu.freenetproject.org/ don't work and we're deciding what to do about them.

Major changes:
- Significant bugfixes and other improvements to opennet. Hopefully this will improve general performance, but particularly bootstrapping and reconnecting.
- Plugins are now translatable; each has its own separate translation page. I am not sure whether translation override files work for plugins at the moment, so be careful to save your changes before restarting.
- Fix a datastore bug related to our recent change to use the store as extra space for the cache.
- Various minor and (relatively) major optimisations and memory usage improvements.
- Show the time that a download/upload last had anything happen.
- New version of Library with many improvements. It will still cause the node to crash with Out of Memory sometimes on searches for popular words.
- True plugin auto-updating, allowing us to deploy new versions of plugins in between stable build releases, with all the appropriate checks for security, compatibility etc.
- Many new seednodes, and some old ones that weren't working any more removed. Your node will automatically fetch and use the new seednodes over Freenet.

THANKS TO:

artefact2
bombe
evanbd
infinity0
juiceman
saces
sdiz
TheSeeker
toad
xor

Greetings,
Nemo.

5
Off-Topic / 26C3: "Here Be Dragons"
« on: December 23, 2009, 07:50:28 PM »
Hello Hackers, Nerds, Geeks, Freaks, whatever!

A quote from http://events.ccc.de/congress/2009/wiki/Welcome
Quote
The 26th Chaos Communication Congress (26C3) is the annual four-day conference organized by the Chaos Computer Club (CCC). It takes place from December 27th to December 30th 2009 at the bcc Berliner Congress Center in Berlin, Germany.

The Congress offers lectures and workshops on a multitude of topics and attracts a diverse audience of thousands of hackers, scientists, artists, and utopians from all around the world. The 26C3s slogan is "Here Be Dragons".

It's my tradition to go to Berlin between Christmas and New Year's Eve.  ;D

If you don't want to go to Berlin, it's possible to watch the lectures live via webstreaming (details should be published somewhere in the congress wiki), either at home or at an alternate location (because the bcc Berliner Congress Center is always very crowded, they are starting a distributed congress this year: "Dragons everywhere" is a list of locations with public viewing of the webstreams in real time).

After the congress there will be video recordings of the talks for download, and after a few weeks there will be official recordings (see the recordings of last year's 25C3), just in case you want to re-watch a talk or don't want to watch it in real time.  :)


I'm sure there are some talks that might interest you; just have a look at the list of lectures.
And by the way: the congress always has a very fast Internet connection with public IP addresses for its participants. Perhaps you will see increased network performance in your P2P or anonymous P2P network, or find fast public FTP servers... Congress participants have many ideas for using this fat Internet pipe. Just check the congress wiki for updates.  ;D


Greetings,
Nemo.

6
Boardcafe / Article "Totalitäre Open Source Entwicklung" ("totalitarian open source development")
« on: July 22, 2009, 08:22:50 AM »
On Heise I found an article worth reading about the customs of large and small open source projects. In my opinion it describes the situation quite well.

And if anyone is unhappy with the development of StealthNet or other open source projects, they should read and digest this article ASAP.  ;)

http://www.heise.de/open/Die-Woche-Totalitaere-Open-Source-Entwicklung--/artikel/142116


Regards,
Nemo.

7
Quote from http://freenetproject.org
Quote
The Freenet Project is very pleased to announce the release of Freenet 0.7.5.

Freenet is free software designed to allow the free exchange of information over the Internet without fear of censorship, or reprisal. To achieve this Freenet makes it very difficult for adversaries to reveal the identity, either of the person publishing, or downloading content. The Freenet project started in 1999, released Freenet 0.1 in March 2000, and has been under active development ever since.

Freenet is somewhat unusual in that you can publish content to Freenet, and then disconnect from the network. This content will remain available to other Freenet users, although it may eventually be deleted if nobody is interested in it. Freenet will copy and move the content around the network according to demand, making it very difficult for an adversary to remove content. Freenet will automatically create more copies of popular content to ensure that it will always be available.

Freenet 0.7 introduced the "darknet" concept, allowing users to only connect to their trusted friends (and through them to their friends' friends, and the entire network), greatly reducing their vulnerability to attack. You can use Freenet even if you don't know any other Freenet users, it just won't be as secure.

Freenet 0.7.5 features major improvements to performance and usability, as well as improvements to security and robustness. In particular:

    * Freenet now uses a database to store longer-term data that must survive a restart. This increases Freenet's speed and reduces its memory usage. In particular, you can now have almost any number of downloads and uploads in progress without worrying about memory usage.
    * Improvements to the web interface make it clearer what you can do with Freenet, show progress when loading a page or file will take more than a few seconds, integrate search into the browse page, and generally improve usability in many areas.
    * Significantly improved performance for inserting and retrieving files and especially pages, and also for Freenet's initial connection to the network.
    * A new installer for Windows which works with Vista as well as Windows XP/2000 (Freenet also works on Mac and Linux systems).
    * Many other optimizations.
    * Lots and lots of bug fixes!

There are versions of Freenet 0.7.5 for Windows, Mac, and Linux. They can be downloaded from:

http://freenetproject.org/download.html

If you have any difficulty getting Freenet to work, or any questions not answered in the  faq, please join us on IRC in the #freenet channel at irc.freenode.net, or email the support mailing list. If you have any suggestions for how to improve Freenet, please visit our uservoice page.

There is a lot of work still to do on Freenet, particularly when it comes to ease of use. If you have Java programming or web design skills, or would like to help translate Freenet into your own language, and would like to help us improve Freenet, please join our development mailing list and introduce yourself.

Try it! Freenet has improved a lot compared to the versions from a few years ago!

Greetings,
Nemo.

8
Quoted from http://freenetproject.org
Quote
The Freenet Project is very pleased to announce the release of Freenet 0.7.5.

Freenet is free software designed to allow the free exchange of information over the Internet without fear of censorship, or reprisal. To achieve this Freenet makes it very difficult for adversaries to reveal the identity, either of the person publishing, or downloading content. The Freenet project started in 1999, released Freenet 0.1 in March 2000, and has been under active development ever since.

Freenet is somewhat unusual in that you can publish content to Freenet, and then disconnect from the network. This content will remain available to other Freenet users, although it may eventually be deleted if nobody is interested in it. Freenet will copy and move the content around the network according to demand, making it very difficult for an adversary to remove content. Freenet will automatically create more copies of popular content to ensure that it will always be available.

Freenet 0.7 introduced the "darknet" concept, allowing users to only connect to their trusted friends (and through them to their friends' friends, and the entire network), greatly reducing their vulnerability to attack. You can use Freenet even if you don't know any other Freenet users, it just won't be as secure.

Freenet 0.7.5 features major improvements to performance and usability, as well as improvements to security and robustness. In particular:

    * Freenet now uses a database to store longer-term data that must survive a restart. This increases Freenet's speed and reduces its memory usage. In particular, you can now have almost any number of downloads and uploads in progress without worrying about memory usage.
    * Improvements to the web interface make it clearer what you can do with Freenet, show progress when loading a page or file will take more than a few seconds, integrate search into the browse page, and generally improve usability in many areas.
    * Significantly improved performance for inserting and retrieving files and especially pages, and also for Freenet's initial connection to the network.
    * A new installer for Windows which works with Vista as well as Windows XP/2000 (Freenet also works on Mac and Linux systems).
    * Many other optimizations.
    * Lots and lots of bug fixes!

There are versions of Freenet 0.7.5 for Windows, Mac, and Linux. They can be downloaded from:

http://freenetproject.org/download.html

If you have any difficulty getting Freenet to work, or any questions not answered in the  faq, please join us on IRC in the #freenet channel at irc.freenode.net, or email the support mailing list. If you have any suggestions for how to improve Freenet, please visit our uservoice page.

There is a lot of work still to do on Freenet, particularly when it comes to ease of use. If you have Java programming or web design skills, or would like to help translate Freenet into your own language, and would like to help us improve Freenet, please join our development mailing list and introduce yourself.

It's worth a try; a lot has happened in the last few years! As the saying goes, "those declared dead live longer".  :)

Regards,
Nemo.

9
On the Heise newsticker there is an article titled "Wie Staaten die Blogosphäre kontrollieren wollen" (how states want to control the blogosphere):

Excerpt from the article:
Quote
Those in power try to control the net in two ways: either they silence troublesome authors, or they block their sites. "Technically, the Chinese government is the most advanced," says Le Coz. Almost 40,000 civil servants are said to be tasked with monitoring the Internet and its roughly 300 million users in the country.

Internet users and privacy advocates are not standing idly by, however; they circumvent the censorship blocks with various tools:
(excerpt from the article)
Quote
The blogosphere, however, has its own tricks for loosening the state's vise. Projects such as the "Everyone's Guide to Bypassing Internet Censorship" or the Psiphon software, developed by a research group at the University of Toronto, aim to give bloggers access to blocked sites. Organizations like the American Electronic Frontier Foundation campaign for bloggers' right to free speech and give tips on how Internet users can disguise their identity.

Has anyone already had some experience with one of the tools mentioned in the PDF? (OK, I don't want to hear about TOR and I2P; those should already be sufficiently well known  ;D)


Regards,
Nemo.

10
I recently tested the new StealthNet version 0.8.6.1 and wondered what is actually being collected in the growing StealthNet search database. No sooner said than done: I wrote a small quick-and-dirty tool that reads out and exports the contents of the search database.  ;D

The currently available output formats are: a list of file names (the default behaviour), a complete CSV table with all data (option "--csv"), an HTML page with all data in a table (option "--htmlcomplete"), and an HTML page with a table of the most important entries and clickable StealthNet links (option "--html").
Output goes to the console's standard output, so it makes sense to redirect it into a file (hence the name "Cat", in reference to the Linux tool "cat"; comparable to the Windows counterpart "type").
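
For example, something like "CatSearchDB --html >links.html" writes a page with clickable stealthnet:// links to a file of your choice.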


More about the tool, as well as the zip file with the executable and all necessary development files including the source code, can be found in the English posting "CatSearchDB - Exports data from StealthNet's Search Database".

Regards,
Nemo.

11
I tried the new StealthNet 0.8.6.1 under Windows XP and wondered what the growing StealthNet Search Database actually contains. That's why I wrote a quick-and-dirty console application.  ;D

If you can handle the Linux command "cat" or the Windows command "type", then you already know how to use my little tool...

Quote from README.txt:
Quote
CatSearchDB - Exports data from StealthNet's Search Database

Nemo, 22.4.2009

The executable is in "CatSearchDB\bin\Release". You get some help and usage advice with the command "CatSearchDB.exe --help".

Source code and all relevant project files are provided - use, improve and share it! I used "Microsoft Visual C# 2008 Express Edition" for development under Windows XP. It runs on .NET 2.0 and should also run on Mono 2.0.

Have fun! Hack the Planet! ;-)


Quote from console:
Quote
CatSearchDB --help

CatSearchDB v0.1 for StealthNet 0.8.6.1 by Nemo
22.4.2009, based on StealthNet source code
This is a quick and dirty program; use it at your own risk and don't expect correct functionality or further development.
WARNING: Stop StealthNet first before running this tool!
Known problems: strange filenames could break the HTML or CSV table, and big SearchDB files lead to huge HTML or CSV files. :-)



CatSearchDB works like 'cat' and extracts the content of your StealthNet Search Database (file 'searchdb.dat') to StdOut. Simply put it into the 'preferences' folder of StealthNet and run it on the command line. When started without options it will extract only filenames in a list.

Usage: 'CatSearchDB >Outputfile' or something like 'CatSearchDB | sort >Outputfile'

       Options:

                 -h --help /h /?  this text

                 --html           export as HTML stealthnet:// link-website

                 --htmlcomplete   export everything as HTML (no links)

                 --csv            export everything as CSV data

The zip file is attached to this posting. Comments are welcome, but don't expect much development on this tool.
And don't sue me if this tool eats your cat or causes any other damage...  :P

Greetings,
Nemo.

12
Hello everyone

I have been toying with the idea of mounting the files in the StealthNet network into one's own file system. That way you could play media files directly without having to bother downloading them first, copy (i.e. download) files to the local hard disk with the file manager, run a slideshow of image files, or search StealthNet the way you are used to locally (even file indexing à la Google Desktop Search would be conceivable)...  ;D

In the process I came across Mono.Fuse; with it, it should be possible to write a StealthNet client for Linux in C# that mounts the available files into the system as a file system via FUSE. The file system would be read-only, since you cannot modify files shared by other people.
The StealthNet search database could serve as the list of available files, although many files are certainly only sporadically available in practice...  ::)
Various filters would be conceivable, e.g. a subdirectory "./recentFiles" showing only recently discovered files, "./highAvailableFiles" showing only files with more than XYZ sources, directories "./nodeXYZ" showing the files shared by that particular node, and so on...  :D
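
Just to make the directory idea concrete, here is a small sketch of how such virtual paths could be mapped onto entries from the search database. It deliberately leaves out the actual FUSE binding, and all type names, fields and thresholds are invented for illustration:

Code: [Select]
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical in-memory view of one entry from the search database.
class SearchEntry
{
    public string FileName;
    public int Sources;
    public DateTime FirstSeen;
}

class VirtualDirectorySketch
{
    // Maps a virtual directory (as it would later be exposed via FUSE)
    // to a filtered list of file names. Thresholds are arbitrary examples.
    static IEnumerable<string> ListDirectory(string path, IEnumerable<SearchEntry> db)
    {
        if (path == "/recentFiles")
            return db.Where(e => e.FirstSeen > DateTime.Now.AddDays(-7))
                     .Select(e => e.FileName);
        if (path == "/highAvailableFiles")
            return db.Where(e => e.Sources >= 5).Select(e => e.FileName);
        return db.Select(e => e.FileName);   // root: the complete list
    }

    static void Main()
    {
        List<SearchEntry> db = new List<SearchEntry>
        {
            new SearchEntry { FileName = "talk.ogg", Sources = 7, FirstSeen = DateTime.Now.AddDays(-2) },
            new SearchEntry { FileName = "old.iso",  Sources = 1, FirstSeen = DateTime.Now.AddDays(-40) }
        };
        foreach (string name in ListDirectory("/recentFiles", db))
            Console.WriteLine(name);   // prints only talk.ogg
    }
}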

It would be an interesting addition to the extensive list of FUSE-based file systems.
But the latency is almost certainly too high to, say, play movies directly from StealthNet, since every requested file chunk has to be fetched from StealthNet first. Likewise, when opening a directory, file managers on Linux read the first few bytes of every file to detect its type; with StealthNet as the data source this too only works with considerable delay...  ::)
I don't know whether applications can cope with such a stuttering file system... With clever read-ahead and caching measures one could certainly optimize things a bit...


Just my two cents.

Regards,
Nemo.

13
RShare/StealthNet (deutsch) / Passive search, passive downloads. Useful?
« on: December 09, 2008, 09:50:51 AM »
Hello everyone,

An old project called NapShare came to my mind in connection with MUTE. Originally it was a client for Gnutella networks; now it is a MUTE client.

Quote
NapShare is a fully automated, multi network P2P client made to run 24/7 unattended.

Searching and automatic downloading happen without any user intervention. You supply a list of keywords and filters for the file types you want and it downloads overnight, automatically, also sharing whatever it gets. The automated "brain" tries to simulate searching and downloading like a human would.

Take a nap while it does the work!


I can imagine something similar for StealthNet. An unattended node participates in the P2P network and forwards gigabytes of data belonging to other people's downloads. On top of that, thousands of search queries and results pass by in cleartext. Although somewhat risky legally, the following two functions would be conceivable:

- Passive search: the user enters search terms. StealthNet does not send out any search queries of its own; instead it shows matching search results from other nodes in a "passive search" tab and forwards them to the recipient as usual. Over time you collect search results without ever actively searching; a kind of "stealth mode" for searching (see the sketch after this list).

- Passive download: since you are shovelling gigabytes of data through your node anyway, you could keep a copy of the data you are interested in yourself. Say I'm interested in MP3s by musician XYZ (as a backup of the CD I bought  ;D); StealthNet could then make a local copy while passing the data through. Depending on the file contents this is certainly legally delicate, but you can always delete what you don't like. Since this "stealth mode" for downloads will almost certainly never catch all pieces of a file, you would have to start a real search for the missing pieces after, say, a week.
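
A minimal sketch of the passive search idea, as a hook at the point where a node would forward a search result; all type and member names here are hypothetical and not taken from the StealthNet source:

Code: [Select]
using System;
using System.Collections.Generic;

// Hypothetical shape of a search result that is being forwarded by this node.
class ForwardedSearchResult
{
    public string FileName;
}

class PassiveSearchSketch
{
    // Keywords the local user is interested in (compared in lower case).
    static readonly List<string> Keywords = new List<string> { "ubuntu", "creative commons" };

    // Matches collected over time for the "passive search" tab.
    static readonly List<ForwardedSearchResult> Matches = new List<ForwardedSearchResult>();

    // Would be called for every result just before it is forwarded as usual.
    static void Inspect(ForwardedSearchResult result)
    {
        string name = result.FileName.ToLowerInvariant();
        foreach (string keyword in Keywords)
        {
            if (name.Contains(keyword))
            {
                Matches.Add(result);   // keep a local note, then forward unchanged
                Console.WriteLine("Passive hit: " + result.FileName);
                break;
            }
        }
    }

    static void Main()
    {
        Inspect(new ForwardedSearchResult { FileName = "ubuntu-10.10-desktop-i386.iso" });
        Inspect(new ForwardedSearchResult { FileName = "holiday-photos.zip" });
    }
}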


Comments? Useful or not? Suggestions for improvement?

Regards,
Nemo.

14
Freenet 0.5 / Freenet 0.5 inproxy via I2P
« on: October 24, 2008, 09:41:41 AM »
I just found an inproxy for browsing Freenet 0.5! I didn't know that there was still activity on this network (its successor Freenet 0.7 is actively maintained; Freenet 0.5 is no longer available on www.freenetproject.org ; many Freenet client applications died or were ported to the new Freenet Client Protocol 2.0 implemented in Freenet 0.7).



You need to have I2P running to browse this inproxy:
http://fproxy.tino.i2p/servlet/nodeinfo/

(I remember that Tino runs an inproxy into I2P from the Internet, so it should be possible to browse Freenet 0.5 via WWW -> I2P -> Freenet 0.5...)



It's amazing that there's still an active Freenet 0.5 freesite index:
http://fproxy.tino.i2p/SSK%40y~-NCd~il6RMxOe9jjf~VR7mSYwPAgM,ds52dBUTmr8fSHePn1Sn4g/OneMore//

Quote
Index Generated: 2008-09-28 13:21:35 GMT
Total Sites: 718



The freesite index still finds new or updated freesites:
http://fproxy.tino.i2p/SSK%40y~-NCd~il6RMxOe9jjf~VR7mSYwPAgM,ds52dBUTmr8fSHePn1Sn4g/OneMore//index-new.html


Who says Freenet 0.5 is dead? It's still working...  ;)

Is there anyone who can tell me a bit about the remaining activity in Freenet 0.5? Where can I get the latest stable build and up-to-date seednodes for Freenet 0.5?

Greetings,
Nemo.

15
RShare/StealthNet (deutsch) / Analysis of StealthNet
« on: September 11, 2008, 05:33:55 PM »
Since many discussions about possible security holes in StealthNet fizzle out, and many fundamental topics are either ignored or dealt with behind the scenes, I want to form my own picture of the network-internal processes. Last week I downloaded the source code of version 0.8.3.1 via SVN and skimmed through it.

I actually wanted to work with MonoDevelop on Ubuntu Linux, but apparently Mono on Ubuntu Hardy has problems compiling internal anonymous classes. Luckily I still have a machine running Windows XP, on which I installed Microsoft's free development environment Visual C# 2008 Express. I learned C and Java a few years ago, and I have to say that this C# doesn't feel all that foreign. ;)


Long story short: I built an "extended logging" version of StealthNet that writes the most important network data to the log file via the existing logger whenever a StealthNet command is received or sent. For that I made changes to the Command*.cs files and added a method for logging hex values to the Logger class. No IP addresses are logged, only the StealthNet-internal commands.
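
The hex logging itself is nothing special; conceptually it is just something along these lines (names are mine, not the ones actually used in the StealthNet Logger class):

Code: [Select]
using System;
using System.Text;

static class HexLogSketch
{
    // Turns a byte array (e.g. a CommandID or PeerID) into the kind of
    // hex string that shows up in the log excerpt below.
    public static string ToHex(byte[] data)
    {
        StringBuilder sb = new StringBuilder(data.Length * 2);
        foreach (byte b in data)
            sb.Append(b.ToString("X2"));
        return sb.ToString();
    }

    static void Main()
    {
        byte[] id = new byte[] { 0x43, 0xAD, 0xF0, 0x24 };
        Console.WriteLine("CommandID=" + ToHex(id));   // prints CommandID=43ADF024
    }
}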

After six days of continuous operation (I ran no searches and shared no files; the node ran behind a NAT without port forwarding), 1 GB of log files of the following form has accumulated:
Code: [Select]
00:02:30: Recv: Command21, CommandID=43ADF0248397E1252A4B9B0FF74AE7161F0C9FD7620906AEDC9B41DD53329D45248A466D1D97B8F3DB1440079D518C07, Reserved=0, HopCount=2, SenderPeerID=FCE993319FED3D9175C6DBE2A4CC8A466E7003CFD29E315607191214923047ACC75BC8DB387231C9859502A64963F5A7, SearchID=D17F44685A9D01383A3DA9DC0E6F3B590E75EA5C9A079420DFF96472DC5851A53BD9953D3C3B0F6A3B15F159B1B62BC2, SearchPattern=acronis*
00:02:30: Recv: Command60, CommandID=931BD2F1129C3FB15A543005E37A99ED0453308A966C0F360D5627F8735D423C97F387F2411637DEF1323A3C99DA4344, FloodingHash=92EBD8D1C7A5C0015DE3D208F25C4049B977D37132E962E6832C8995C4CDD525D7F9BBF4867AA0AD08B905154F298FBF, SenderPeerID=230C4EAB22A176956390E26778987A49450448E0D150F6BBBA70E69B5198A0F2E1A4656FBF41C5979082BB3AD17099E3, SourceSearchID=286DC69840BE69E59F26138B830525A0983F47DC01550966E1E4C2E9245543CDE012088F4984F7C5DFAD8DD9751B787E, HashedFileHash=ED1FC28BD97653C78CC092863EB235EB7BFB60DA447EC7B42272FE05A8AD8469399B44453ECB8690E264857C6403F97A0E0AB5716B5C2F4287C74087DFB8ACD2
00:02:30: Sent: Command60, CommandID=931BD2F1129C3FB15A543005E37A99ED0453308A966C0F360D5627F8735D423C97F387F2411637DEF1323A3C99DA4344, FloodingHash=8647CE878E7FCB3219371F9E75E33C9FC8A58D1C6A2C57BBC8D4D762F048B274580F35E9666D6E5CBD820B2CE5CB51F6, SenderPeerID=230C4EAB22A176956390E26778987A49450448E0D150F6BBBA70E69B5198A0F2E1A4656FBF41C5979082BB3AD17099E3, SourceSearchID=286DC69840BE69E59F26138B830525A0983F47DC01550966E1E4C2E9245543CDE012088F4984F7C5DFAD8DD9751B787E, HashedFileHash=ED1FC28BD97653C78CC092863EB235EB7BFB60DA447EC7B42272FE05A8AD8469399B44453ECB8690E264857C6403F97A0E0AB5716B5C2F4287C74087DFB8ACD2
00:02:30: Recv: Command60, CommandID=931BD2F1129C3FB15A543005E37A99ED0453308A966C0F360D5627F8735D423C97F387F2411637DEF1323A3C99DA4344, FloodingHash=D287E26B2B042A215FAD9F4765B407B99E6900BB6D12EF2716AD930505F93606E6E17ECBC26723B7120206FD9E2EFF4B, SenderPeerID=230C4EAB22A176956390E26778987A49450448E0D150F6BBBA70E69B5198A0F2E1A4656FBF41C5979082BB3AD17099E3, SourceSearchID=286DC69840BE69E59F26138B830525A0983F47DC01550966E1E4C2E9245543CDE012088F4984F7C5DFAD8DD9751B787E, HashedFileHash=ED1FC28BD97653C78CC092863EB235EB7BFB60DA447EC7B42272FE05A8AD8469399B44453ECB8690E264857C6403F97A0E0AB5716B5C2F4287C74087DFB8ACD2
00:02:31: Recv: Command41, CommandID=703EA64C8A9AB9C3B9B080F379651CCCF8F8F20932A3A559D06C3FE8100117585D099D2619798E240D57CD45511F5AEF, SenderPeerID=9EBD11549F6BCE69B3B49E525297B272806156309AC340221856F438CC034B72BC88A9B35B1E4EBD3601333F844BF4A5, ReceiverPeerID=E4A0C469DF2E875885015C54CE0C17FD3DCC7A1A1F7213E8E7851DFCCF7B73CA94A5F69A07A890E55D9F59A7CBB6505B, DownloadID=33835FB099F775998F9871B8C54EB9AB60CB159F4E13D797C9B1391EAF3E924AA62A5861184A10F98A04FA8B72725B57, QueuePosition=0
00:02:31: Recv: Command61, CommandID=475A9B2768C8E035C4BB1C7F35F2CFC7A3A96B3A4A21B055F94B28FD643DC55C0A943059872BF64541E15667E5C52BCE, Reserved=0, HopCount=9, SenderPeerID=71AC999C3912155C19DA4B547300ED37D9CAEA027ADFAD102042B677BDC54CD5558AA4C3BA4BDEF45945B200703BC003, SourceSearchID=774E79871629E03C45F646D4C4C38A1D40BF7C18A0B4C255D8C0B59D84480302FDFAF5650216A32CEC1C7C9F7713E7E4, HashedFileHash=5006161F1926C362E7268AD8CF96A7CE8FA0DB03648BD8F9D7917D8597A925EB689B4F8C51607195DEC4F2066048947CFA24BEDB514735DE7B7445E96A26514B
00:02:31: Recv: Command42, CommandID=BA770E8748CDE43FDBA65F43958DF25E492CA085B976154F1A277770F7241B28B614DFAA957FD4C0C9035AA38775F1C0, SenderPeerID=94FDDF661BD239AB82C2569177EA2E71DC89C203FD7A5371CA62B42F5C4E6EF06E1AA6C294BFD51825753119EF42329F, ReceiverPeerID=2B4733CE26439B1D6ECD0475ABD04CDEB11AB96594E0A7F7C1615B17BC2B153A08132976FD1BA59118CF7995997ED88F, DownloadID=5E719CB83F14ECF3644FB569EF28428225E16E99E65A8984EC1566351574BEBBDEEAF85CB37B9A40AB4B4835876BDB68, Sector=4390
00:02:31: Recv: Command60, CommandID=931BD2F1129C3FB15A543005E37A99ED0453308A966C0F360D5627F8735D423C97F387F2411637DEF1323A3C99DA4344, FloodingHash=D287E26B2B042A215FAD9F4765B407B99E6900BB6D12EF2716AD930505F93606E6E17ECBC26723B7120206FD9E2EFF4B, SenderPeerID=230C4EAB22A176956390E26778987A49450448E0D150F6BBBA70E69B5198A0F2E1A4656FBF41C5979082BB3AD17099E3, SourceSearchID=286DC69840BE69E59F26138B830525A0983F47DC01550966E1E4C2E9245543CDE012088F4984F7C5DFAD8DD9751B787E, HashedFileHash=ED1FC28BD97653C78CC092863EB235EB7BFB60DA447EC7B42272FE05A8AD8469399B44453ECB8690E264857C6403F97A0E0AB5716B5C2F4287C74087DFB8ACD2

Today I wrote a quick-and-dirty tool called "LogAnalyzer" in MonoDevelop, which produces a few interesting statistics from this data. The log files were on a memory stick; the tool took eight minutes to analyse the data.  ;D
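
The core of such an analysis is just a line-by-line pass over the logs that counts command types per direction. A stripped-down sketch (not the attached LogAnalyzer code) could look like this:

Code: [Select]
using System;
using System.Collections.Generic;
using System.IO;

class LogAnalyzerSketch
{
    static void Main(string[] args)
    {
        // Counts log lines like "00:02:30: Recv: Command21, ..." per direction and command type.
        Dictionary<string, int> counts = new Dictionary<string, int>();

        using (StreamReader reader = new StreamReader(args[0]))
        {
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                string[] parts = line.Split(new char[] { ' ' }, 4);
                if (parts.Length < 3)
                    continue;
                // parts[1] is "Recv:" or "Sent:", parts[2] is "Command21," etc.
                string key = parts[1] + " " + parts[2].TrimEnd(',');
                int n;
                counts.TryGetValue(key, out n);
                counts[key] = n + 1;
            }
        }

        foreach (KeyValuePair<string, int> entry in counts)
            Console.WriteLine(entry.Key + "\t" + entry.Value);
    }
}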

I have to say, the way the RShare protocol works is not that easy to follow. At some point I will have to draw a state diagram to analyse things more closely.
My current impression: spoofing packets for which my node does not lie on the route between sender and receiver is practically impossible. The packets carry long, randomly generated ID values that essentially cannot be guessed. I assume that packets with invalid SearchID/DownloadID values are discarded, so running searches or downloads cannot be disrupted by arbitrary peers.
Every node that forwards the packets can also manipulate them (man-in-the-middle attack). That has to be accepted in an anonymous P2P network. However, the suspicion is hardening that the receiver of "file sectors" cannot verify their correctness, because the SectorHash is apparently sent along in the same packet. It would thus be entirely possible for a forwarding node to replace the "file sector" and fix up the corresponding SectorHash, with the receiver only noticing when checking the overall hash of the file (see my posting on the topic Download corrupt).


If anyone is interested in which data passed through my node during the last six days, the report is attached to this posting. The extended logging version of StealthNet 0.8.3.1 mentioned above, including source code, is attached as a RAR archive, and the quick-and-dirty tool "LogAnalyzer" is attached as a ZIP. Use it at your own risk and have fun!  ;)

I find the following points of the "LogAnalyzer" report interesting (they could of course also be bugs in my tool...):
- Close to two million commands were received by my node, but only 120,000 were sent. And that with no shared files and no searches on my part...  ::)
- There were about 21,000 senders of packets and about 12,000 receivers. I assume that a few participants hold a large part of the content, so downloaders are often stuck in a queue and have to keep asking again and again.
- My node received about 64,000 file pieces (and, I assume, passed every one of them on to the peers); at 32 kB each, that comes to roughly 2 GB of data that crossed my node as pure file pieces. Based on what I described above, I fear my node could easily have manipulated them...  :o
- There are many network command types that my node never forwarded. I find that surprising, since I generated no traffic locally, so the node should not have been a data receiver either. I would have to read into the source code to figure out what the roughly two dozen different network commands do.  ??? (Since I haven't found any documentation on this, it is heading towards reverse engineering...  :()


Regards,
Nemo.
