©
2001.07.30
[)/\§|\|||\||}{

Quotations
April 2000

 


These quotations come from postings made on Usenet.

Name - Date in sci.astro.seti (s.a.s.)/alt.sci.seti (a.s.s.)/other, topic: quoted text

This month's most interesting messages:

All are from SETI@home team members at Berkeley. These are unofficial statements, but interesting nonetheless.

Hiram Clawson - 2000.04.04 in s.a.s., server health:

You can get an idea of the health of the server by noting the number of WUs turned in during the past 24 hours.
The totals page at http://setiathome.ssl.berkeley.edu/stats/totals.html indicates that during the past 24 hours some 566,000 WUs have been successfully returned, which is normal. If there were a problem, you would be in massive company and the alarm bells would be ringing in Berkeley. Several mechanisms are currently in place to monitor the health of the server, and the crew is alerted immediately.

Eric J. Korpela - 2000.04.11 in s.a.s., gaussian fitting:

The client uses the data in the work-unit header to determine the appropriate gaussian width (and whether to do a gaussian fit at all). (It basically computes beam_width / angle_range * workunit_duration.)
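The rule Korpela describes can be sketched as follows. This is a hypothetical illustration of the arithmetic in the quote, not the actual SETI@home client code; the field names and the threshold check are assumptions.

```python
# Hypothetical sketch of the gaussian-width rule described above.
# Field names (beam_width, angle_range, workunit_duration) mirror the
# quote, not the real SETI@home work-unit header format.

def gaussian_fit_width(beam_width, angle_range, workunit_duration):
    """Time (in seconds) the telescope beam dwells on a fixed sky
    position during the work unit: the expected temporal width of a
    gaussian signal from a celestial point source."""
    if angle_range <= 0:
        raise ValueError("angle_range must be positive")
    return beam_width / angle_range * workunit_duration

def should_fit_gaussian(beam_width, angle_range, workunit_duration,
                        min_width=0.0):
    """Assumed threshold check: a gaussian fit only makes sense when
    the beam sweeps slowly enough to produce a measurable width."""
    width = gaussian_fit_width(beam_width, angle_range, workunit_duration)
    return width > min_width

# Example: a 0.1-degree beam sweeping 2 degrees over a 107-second
# work unit dwells 0.1 / 2.0 * 107.0 = 5.35 s on any given point.
print(gaussian_fit_width(0.1, 2.0, 107.0))
```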

Hiram Clawson - 2000.04.13 in s.a.s., caching on UNIX:

Running a cache on UNIX is trivial if you can handle a couple of lines of shell code. Please note the FetchCache and RunCache examples at:
http://setiathome.ssl.berkeley.edu/faq.html#q1.16

Matt Lebofsky - 2000.04.28 in s.a.s., splitting the tapes:

We have as many as six splitters working at any given time, all of which need to be constantly fed. I usually throw newer data tapes into the drives (in chronological order). Occasionally I'll put in some not-so-recent data that I didn't split before, because the tape may be half-filled or might contain too much RFI. And recently, I keep one splitter busy with really old tapes (in this case, from December '98). We avoided splitting these tapes before since the data was already quite old when we launched the project, and we kept getting complaints from people wanting "fresher" data.
But it's all good.

In any event, we haven't split a single tape more than once yet.

I recently updated the server status page so you can see exactly which tapes are currently being split, in case you care:

http://setiathome.berkeley.edu/sstatus.html
