
Cron Jobs

Topics: 1. General, 2. Documentation
Feb 1, 2015 at 5:20 AM
Edited Feb 1, 2015 at 5:23 AM
What is the best set of default cron jobs for SBFspot spot data and 5-minute PVOutput uploads? I am using donation mode and I configured the following so I could use the Weather Underground temps:
spot.Uac1 AS V6,
spot.Udc1 AS V7,
spot.Udc2 AS V8,
spot.Pdc1 AS V9,
spot.Pdc2 AS V10,
spot.Temperature AS V11,

I am using the following to capture 1-minute data between 6am and 9pm, and I think the upload daemon uploads every 5 minutes. I run the event capture 3 times a day. Is this the best way?

*/1 6-21 * * * /usr/local/bin/sbfspot.3/SBFspot -ad0 -am0 -ae0 > /dev/null 2>&1
*/5 6-21 * * * sleep 30 && /usr/local/bin/sbfspot.3/SBFspot -ad1 -am0 -ae0 -sp0 > /dev/null 2>&1
0 10,14,20 * * * /usr/local/bin/sbfspot.3/SBFspot -ad4 -am1 -ae5 -sp0 > /dev/null 2>&1

Even with the sleep command I still get the following in the log. What is the best way to deal with this?
[16:21:25] INFO: Uploading datapoint: 20150201,16:20,522365,1680,,,,244.856,236.158,356.476,610.2,997.4,39.3 => OK (200)
[16:21:25] ERROR: batch_set_pvoflag() returned database is locked
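The "database is locked" error is SQLite reporting that two processes tried to write to the database at the same time, which can happen when the 1-minute and 5-minute jobs overlap. A general way to serialize cron-launched runs (this wrapper is an illustration, not part of SBFspot; the lock path and wait time are assumptions) is flock(1):

```shell
#!/bin/sh
# Serialize SBFspot invocations on one lock file so overlapping cron jobs
# never open the SQLite database at the same time.
# The lock path and the 120 s wait are assumptions of this sketch.
LOCK=/tmp/sbfspot.lock

run_locked() {
    # Wait up to 120 s for the lock, then run the given command;
    # if the lock never frees, flock gives up and this run is skipped.
    flock -w 120 "$LOCK" "$@"
}

# cron would pass the real call here, e.g.:
#   run_locked /usr/local/bin/sbfspot.3/SBFspot -ad1 -am0 -ae0 -sp0
run_locked true
```

If every crontab entry goes through a wrapper like this, only one SBFspot process can touch the database at a time, so the locked-database errors should stop without needing precise sleep offsets.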
Feb 4, 2015 at 11:55 PM
Any thoughts on whether these are the right jobs to run every day? I don't want to get too far into the future and realise I can't get some data back from the inverter.

Appreciate any help.
Feb 5, 2015 at 9:56 PM
Enable verbose logging and send the output to a file. Right now you're looking into the dark by redirecting output to /dev/null.
Feb 5, 2015 at 10:20 PM
Thanks, I can grab the logs when I need to. I would like to know if those 3 jobs are all I need to run to capture all of the SMA data.
Feb 6, 2015 at 12:45 PM
Seems to be OK... Most people only run it every 5 minutes, even for events.
Running it every minute for spot data is a bit of overkill IMO.
To disable CSV output, use the -nocsv switch.
Feb 7, 2015 at 1:20 PM
Edited Feb 18, 2015 at 10:25 AM
Dear jpete, some remarks:

Be aware that each operating system (OS) imposes some limits on the cron options that can be used.
For example, some OSes will not execute a cron job every minute (originally a 5-minute cycle was assumed to fit).

-a- When running SBFspot with a repeat interval below 80 seconds:
-a.1- You need to make certain that either there is NO double activation, or that you detect the double activation.
SBFspot includes a "lock" system that rejects multiple accesses to the comms device interface (for BT - for IP??),
but you will not be aware of this unless you analyse the error level or stdout to learn about the multiple activation.

To reduce the chance of "double" activations I modified the parameter "number of retries" to 4 - see the config file.
From experience, 4 is a correct lower value for BT (for IP??).

-a.2- Even then, your second repeated task might get stuck:

second task -> "*/5 6-21 * * * sleep 30 && /usr/local/bin/sbfspot.3/SBFspot -ad1 -am0 -ae0 -sp0 > /dev/null 2>&1"

but the next successful run of that task (5 minutes later) will also upload the data for the previous "dropped" run.

-a.3- I might be wrong, but for the above you need to modify the third task to:
third task -> "1 10,13,16 * * * sleep 30 && /usr/local/bin/sbfspot.3/SBFspot -ad4 -am1 -ae5 -sp0 > /dev/null 2>&1"

Change the start from minute "0" to minute "1";
otherwise that task will interfere at the zero minute at 10:00, 14:00 and 20:00 with your first task,
and when applying only "sleep 30" you will be in conflict with task 2.

Also change the hour values so that they fit into the active period:
with your original value "20", SBFspot will skip execution during the winter period.
For most locations the values 10,13,16 will match the operational period in both winter and summer.

-b- I suggest modifying your task for the daily file and the monthly file to (remember remark a.3):
0 10,13,16 * * * /usr/local/bin/sbfspot.3/SBFspot -ad4 -am2 -ae5 -sp0 > /dev/null 2>&1

-b.1- The need for "5" in the option -ae5 is not clear, but it does no harm.

-b.2- I apply -am2 in my scripts.
Although the current version of SBFspot behaves such that on the first day of the month the previous month is also read,
I have been confronted with exceptional conditions where -am1 was not sufficient to complete my previous-month table.

-c- I support the suggestion by sbf to send the output to a file.

A simple method to keep ONLY the valuable output is (though this requires using a "script" rather than a direct activation from cron):
if the error level from SBFspot = 0, then do nothing with the output;
if the error level =/= 0, then concatenate the output into a file.

I apply several checks on the "error level" and on the contents of stdout and stderr
to capture the output for the "interesting" cases.
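The error-level filtering described above can be sketched as a small wrapper function (the log path and the function name are assumptions of this sketch, not wim's actual script):

```shell
#!/bin/sh
# Keep a command's output only when it fails; successful runs stay silent.
# The log path is an assumption for this sketch.
LOG="${SBFSPOT_ERRLOG:-/tmp/sbfspot-errors.log}"

log_on_error() {
    # Run the given command (cron would pass the real SBFspot call here),
    # capturing stdout and stderr to a temp file.
    tmp=$(mktemp)
    "$@" >"$tmp" 2>&1
    status=$?
    if [ "$status" -ne 0 ]; then
        # Only failures are kept: append a timestamp, the exit code,
        # and the captured output to the log.
        {
            printf '%s exit=%s\n' "$(date '+%F %T')" "$status"
            cat "$tmp"
        } >>"$LOG"
    fi
    rm -f "$tmp"
    return "$status"
}

# A failing command gets logged; a successful one leaves no trace.
log_on_error false || true
log_on_error true
```

Cron would then call this wrapper instead of SBFspot directly, so the log only grows when something actually went wrong.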

-d- The upload task (daemon) was not in your list of cron tasks.
A good practice is to plan the upload in an empty time slot in the scheme, for example:
fourth task -> "*/5 6-21 * * * sleep 150 && /usr/local/bin/sbfspot.3/upload"
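Putting remarks a.3 and -d- together (and following sbf's advice to drop the every-minute spot job), the whole scheme might look like this; the paths are from the thread, but the minute offsets are illustrative assumptions:

```
# spot data every 5 minutes, at minute 0 of the slot
*/5 6-21 * * * /usr/local/bin/sbfspot.3/SBFspot -ad1 -am0 -ae0 -sp0 > /dev/null 2>&1
# day/month/event data, shifted to minute 1 plus 30 s so it never hits a spot run
1 10,13,16 * * * sleep 30 && /usr/local/bin/sbfspot.3/SBFspot -ad4 -am2 -ae5 -sp0 > /dev/null 2>&1
# upload daemon in its own slot, 150 s into the 5-minute cycle
*/5 6-21 * * * sleep 150 && /usr/local/bin/sbfspot.3/upload > /dev/null 2>&1
```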

kr wim