Search Results

Search found 3070 results on 123 pages for 'cron jobs'.


  • Best method to search hierarchical data

    - by WDuffy
    I'm looking at building a facility which allows querying for data with hierarchical filtering. I have a few ideas about how I'm going to go about it, but was wondering if there are any recommendations or suggestions that might be more efficient. As an example, imagine that a user is searching for a job. The job areas would be as follows:

    1: Scotland
    2: --- West Central
    3: ------ Glasgow
    4: ------ Etc
    5: --- North East
    6: ------ Ayrshire
    7: ------ Etc

    A user can search somewhere specific (i.e. Glasgow) or in a larger area (i.e. Scotland). The two approaches I am considering are:

    1. Keep a note of children in the database for each record (i.e. category 1 would have 2, 3, 4 in its children field) and query against that record with a SELECT * FROM Jobs WHERE Category IN Areas.childrenField.
    2. Use a recursive function to find all results that have a relation to the selected area.

    The problems I see with these are:

    1. Holding this data in the DB means having to keep track of all changes to the structure.
    2. Recursion is slow and inefficient.

    Any ideas, suggestions or recommendations on the best approach? I'm using C# ASP.NET with an MSSQL 2005 DB.
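    A third option worth noting: SQL Server 2005 supports recursive common table expressions, which push the recursion into a single set-based query instead of application code or a denormalized children field. A hedged sketch, assuming an Areas table with Id and ParentId columns (the table and column names are illustrative, not from the original post):

    ```sql
    -- Expand the selected area (e.g. Scotland) into itself plus all descendants,
    -- then match jobs against that set in one round trip.
    DECLARE @SelectedArea INT;
    SET @SelectedArea = 1;

    WITH AreaTree (Id) AS (
        SELECT Id FROM Areas WHERE Id = @SelectedArea   -- anchor: the area the user picked
        UNION ALL
        SELECT a.Id FROM Areas a
        INNER JOIN AreaTree t ON a.ParentId = t.Id      -- recursive step: walk down the hierarchy
    )
    SELECT j.* FROM Jobs j
    INNER JOIN AreaTree t ON j.Category = t.Id;
    ```

    This avoids maintaining a children field by hand while keeping the lookup in one query; whether it outperforms a materialized children list depends on the depth of the hierarchy and how often it changes.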


  • Database PK-FK design for future-effective-date entries?

    - by Scott Balmos
    Ultimately I'm going to convert this into a Hibernate/JPA design, but I wanted to start out from purely a database perspective. We have various tables containing data that is future-effective-dated. Take an employee table with the following pseudo-definition:

        employee
            id INT AUTO_INCREMENT
            ... data fields ...
            effectiveFrom DATE
            effectiveTo DATE

        employee_reviews
            id INT AUTO_INCREMENT
            employee_id INT FK employee.id

    Very simplistic. But let's say Employee A has id = 1, effectiveFrom = 1/1/2011, effectiveTo = 1/1/2099. That employee is going to be changing jobs in the future, which would in theory create a new row, id = 2, with effectiveFrom = 7/1/2011, effectiveTo = 1/1/2099, and id = 1's effectiveTo updated to 6/30/2011. But now, my program would have to go through any table that has a FK relationship to employee every night, and update those FKs to reference the newly-effective employee entry. I have seen various postings in both pure SQL and Hibernate forums that I should have a separate employee_versions table, which is where I would have all effective-dated data stored, resulting in the updated pseudo-definition below:

        employee
            id INT AUTO_INCREMENT

        employee_versions
            id INT AUTO_INCREMENT
            employee_id INT FK employee.id
            ... data fields ...
            effectiveFrom DATE
            effectiveTo DATE

        employee_reviews
            id INT AUTO_INCREMENT
            employee_id INT FK employee.id

    Then to get any actual data, one would have to actually select from employee_versions with the proper employee_id and date range. This feels rather unnatural, having this secondary "versions" table for each versioned entity. Anyone have any opinions, suggestions from your own prior work, etc.? Like I said, I'm taking this purely from a general SQL design standpoint first before layering in Hibernate on top. Thanks!
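    For what it's worth, the lookup against the proposed versions table is straightforward; a sketch under the pseudo-definitions above (the @asOfDate parameter is illustrative):

    ```sql
    -- Fetch the version of employee 1 in effect on a given date.
    SELECT v.*
    FROM employee e
    JOIN employee_versions v ON v.employee_id = e.id
    WHERE e.id = 1
      AND @asOfDate >= v.effectiveFrom
      AND @asOfDate <  v.effectiveTo;
    ```

    The payoff of this shape is that employee_reviews and every other FK keep pointing at the stable employee.id, so the nightly FK-rewriting job disappears; only queries that need effective-dated fields pay the extra join.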


  • Batch file recursively find files and rar them

    - by b1gf00t
    Hi there, I have a parent directory which hosts many subdirectories, and in every subdirectory there are .mpg movies. Some of the directories might contain one or more .mpg movies. I would like to automate the process below, which I have been doing manually.

    Step One: If the directory has more than one .mpg file, I create separate directories for each and move each file into its own directory, naming the directory after the file.

    Step Two: I rar each video file in its directory as per one of my profiles; that splits the movie into 50MB parts, tests the archive, deletes the source, and instructs WinRAR to wait if another rar is executing. I am doing this so I can queue jobs manually.

    Step Three: After having all the rars in the subdirectories, I start creating a checksum for every directory, leaving checksum.sfv in every directory.

    Step Four: I copy the parent folder and its subdirectories to my external drives.

    I was hoping that someone could assist me in creating a script. I was able to automate the process of creating directories named after each file and moving the files in; however, I never succeeded in automating Step Two. I am using the software below:

    Winrar from rarlabs
    exf from exactfile

    Appreciate your assistance.
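    A hedged starting point for Step Two, assuming rar.exe is on the PATH and each movie already sits in its own subdirectory (the switch behavior should be confirmed against your WinRAR version's rar.txt):

    ```bat
    @echo off
    rem For every subdirectory, archive its .mpg into 50MB volumes,
    rem test the result, and delete the source on success.
    rem   -v50m  split into 50 MB volumes
    rem   -t     test archive after creation
    rem   -df    delete source files after successful archiving
    for /d %%D in (*) do (
        if exist "%%D\*.mpg" (
            rar a -v50m -t -df "%%D\%%~nxD.rar" "%%D\*.mpg"
        )
    )
    ```

    Serializing the jobs falls out naturally here, since the loop only starts the next rar after the previous one exits.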


  • What's the difference between UI development and front-end development?

    - by Nick Lowman
    I'm a front-end developer and really enjoy jQuery and JavaScript. I've built a lot of websites, done some good jQuery work and built a few JavaScript-based applications, and would really like to get into UI development. Or so I thought. I guessed it would be pretty similar to what I already do, except maybe a little more JavaScript-heavy, but when I looked into it all the job specs said I needed to know about Scrum or Agile development, knowledge of testing frameworks and a good knowledge of JavaScript frameworks and custom events. So, from the specs I get the idea that a UI developer is actually a dedicated JavaScript developer. Is that the case? I understand (with much help from the users on Stack Overflow) JavaScript OO, inheritance, closures, custom events, debugging in Firefox or Aptana etc., and the people I work with seem to think I'm pretty OK at what I do, but clearly my knowledge is not good enough to go for UI jobs. If anyone could tell me a little more about UI development, and whether there are any good resources for learning about it, I would be most grateful, as I couldn't find much on the internet.


  • Issue with VMware vSphere and NFS: recurring APD state

    - by Bastian N.
    I am experiencing issues with VMware vSphere 5.1 and NFS storage on 2 different setups, which result in an "All Paths Down" state for the NFS shares. This first happened once or twice a day, but lately it occurs much more frequently, especially when Acronis Backup jobs are running.

    Setup 1 (Production): 2 ESXi 5.1 hosts (Essentials Plus) + OpenFiler with NFS as storage
    Setup 2 (Lab): 1 ESXi 5.1 host + Ubuntu 12.04 LTS with NFS as storage

    Here is an example from the vmkernel.log:

        2013-05-28T08:07:33.479Z cpu0:2054)StorageApdHandler: 248: APD Timer started for ident [987c2dd0-02658e1e]
        2013-05-28T08:07:33.479Z cpu0:2054)StorageApdHandler: 395: Device or filesystem with identifier [987c2dd0-02658e1e] has entered the All Paths Down state.
        2013-05-28T08:07:33.479Z cpu0:2054)StorageApdHandler: 846: APD Start for ident [987c2dd0-02658e1e]!
        2013-05-28T08:07:37.485Z cpu0:2052)NFSLock: 610: Stop accessing fd 0x410007e4cf28 3
        2013-05-28T08:07:37.485Z cpu0:2052)NFSLock: 610: Stop accessing fd 0x410007e4d0e8 3
        2013-05-28T08:07:41.280Z cpu1:2049)StorageApdHandler: 277: APD Timer killed for ident [987c2dd0-02658e1e]
        2013-05-28T08:07:41.280Z cpu1:2049)StorageApdHandler: 402: Device or filesystem with identifier [987c2dd0-02658e1e] has exited the All Paths Down state.
        2013-05-28T08:07:41.281Z cpu1:2049)StorageApdHandler: 902: APD Exit for ident [987c2dd0-02658e1e]!
        2013-05-28T08:07:52.300Z cpu1:3679)NFSLock: 570: Start accessing fd 0x410007e4d0e8 again
        2013-05-28T08:07:52.300Z cpu1:3679)NFSLock: 570: Start accessing fd 0x410007e4cf28 again

    As long as the issue occurred once or twice a day it really wasn't a problem, but now this issue has an impact on the VMs. The VMs get slow or even hang, resulting in a reset through vCenter in the production environment. I searched the web extensively and asked in forums, but till now nobody has been able to help me. Based on blog posts and VMware KB articles I tried the following NFS settings:

        Net.TcpipHeapSize = 32
        Net.TcpipHeapMax = 128
        NFS.HeartbeatFrequency = 12
        NFS.HeartbeatMaxFailures = 10
        NFS.HeartbeatTimeout = 5
        NFS.MaxQueueDepth = 64

    Instead of NFS.MaxQueueDepth = 64 I already tried other settings like NFS.MaxQueueDepth = 32 or even NFS.MaxQueueDepth = 1, unfortunately without any luck. It would be great if someone could help me with this issue. It is really annoying. Thanks in advance for all the help.

    [UPDATE] As I explained in the comment below, here is the network setup: On the production setup the NFS traffic is bound to a separate VLAN with ID 20. I am using an HP 1810 24-port switch. The OpenFiler system is connected to the VLAN with 4 Intel GbE NICs with dynamic LACP. The ESXis both have 4 Intel GbE NICs using 2 static LACP trunks containing 2 NICs each. One pair is connected to the regular LAN and the other one to VLAN 20. (Screenshots of the vSwitch, the switch configuration and the port configuration accompanied the original post.) On the lab setup it's a single Intel NIC on each side without VLAN, but with a different IP subnet.
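    For anyone reproducing this, the same advanced options can also be set from the ESXi shell instead of the vSphere client; a sketch assuming ESXi 5.1's esxcli namespace (verify the option paths on your host with `esxcli system settings advanced list` first):

    ```sh
    # Set the NFS-related advanced options from the post above on an ESXi 5.1 host.
    esxcli system settings advanced set -o /Net/TcpipHeapSize -i 32
    esxcli system settings advanced set -o /Net/TcpipHeapMax -i 128
    esxcli system settings advanced set -o /NFS/HeartbeatFrequency -i 12
    esxcli system settings advanced set -o /NFS/HeartbeatMaxFailures -i 10
    esxcli system settings advanced set -o /NFS/HeartbeatTimeout -i 5
    esxcli system settings advanced set -o /NFS/MaxQueueDepth -i 64
    # Note: the TCP/IP heap settings only take effect after a host reboot.
    ```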


  • Weird random application hang problem

    - by haridsv
    I am trying to understand an application hang problem that started up lately on my Windows XP system. The system runs fine for days (sometimes) without ever shutting down or putting it to sleep, but the problem first shows up as one of the apps hanging. The application's UI stops responding, or one or more background threads hang, so even though the GUI is responding, it is not doing anything (e.g., in VirtualDub's case, the UI responds fine, but the job doesn't progress and I won't even be able to abort it).

    The weird part is that if I try to kill such an app, the program that is used to kill it goes into the same mode (i.e., it hangs instead of the original). E.g., if I use Process Explorer to kill it, the original program exits, but procexp now hangs. If I use another instance of procexp to kill the one that is hanging, this repeats, so there is always at least one program hanging in that state. This is not specific to procexp; I tried the native Task Manager and even the "End Process" dialog from Windows Explorer that shows up when you try to close a non-responsive GUI (in this case, Explorer itself hangs). The only program that didn't hang after the kill is the command-line taskkill. However, in this case, Explorer hangs instead of taskkill.

    Also, once this problem starts manifesting, it soon ends up freezing the whole system to the extent that even a clean shutdown is not possible, so I have learned to reboot as soon as I notice this problem. However, this is very inconvenient, as I often have encoding batch jobs going on which can't continue after the restart. The longer I leave the system running after seeing this problem, the more applications get into this state.

    I have tried to do a repair install, but that didn't make any difference. I also uninstalled some of the newer installs, but again no difference. I tried to search online, but got inundated with results for generic hang and crash related problems. Though I couldn't notice any pattern, it seems as though the problem is more frequent if I have some video encoding going on at the time. I have had the system running for days when I only do browsing and internet audio/video chat, before I decide to start encoding something and the problem starts to show up. I am not too sure if it is the encoding program that first hangs, though I almost always noticed that hanging too (like VirtualDub stopping to make progress). I also had to reboot 3 times in one day when I was heavily experimenting with encoding. I would appreciate any help in narrowing down this problem and saving me the trouble of reinstalling. I don't especially want to lose my GOTD installs.


  • Can't set up printing from Mac OS X (10.5.7) to an HP PSC 2410 shared from PC running Ubuntu 9.10

    - by Weston C
    I've got an HP PSC 2410 printer shared from a fresh Ubuntu 9.10 installation. I'm able to send documents to this printer over the network from another Ubuntu machine. But so far, I haven't been able to find a setup where I can send documents to that printer from a MacBook running 10.5.7.

    On the Mac side, when setting things up, I go into System Prefs > Print & Fax, click on the "+" mark, select "IP", pick "IPP", enter the IP address of the Ubuntu box, leave the queue blank, and enter the name and location; I think it's when I get to the "Print Using" (driver selection) part that I'm running into issues. If I use "Auto Select", it defaults to "Generic PostScript Printer", which I doubt the PSC 2410 is (and sure enough, if I print, the jobs don't go through). If I try "Select a driver to use...", there's no option for an HP PSC 2400. This seems a little odd: I can plug the printer directly into one of our Macs and it immediately figures out the driver and I can print no problem, but that's apparently the way things work. So, that leaves one option: "Other", which, when selected, brings up a dialog apparently for the purpose of manually locating a driver. I've tried visiting HP's web site. They have drivers for earlier versions of Mac OS X, but state that after 10.4, Mac OS X should just come with the relevant drivers.

    I've also tried setting things up by interacting with the CUPS server on the Mac through a browser: I go to http://localhost:631/, select "Add New Printer", pick "Internet Printing Protocol (http)" for the Device selection, enter "http://ubuntu.machine.ip.address:631/printers/hp-psc-2400-series" for the Device URI, select "HP" for Make, and then on the next screen we're back to the problem where the PSC 2400 just doesn't show up. There's an option to "provide a PPD file", which I assume would be the printer driver I can't find. A Google search for "HP PSC 2410 ppd Leopard" doesn't seem to yield much other than a reminder that the printer is supposed to just work out of the box on Leopard. A local search for ".ppd" or "2410" on either Mac also doesn't yield anything that looks like a relevant print driver. I'm totally stuck at this point. Any advice?


  • Could I centralize batch files more efficiently?

    - by PeanutsMonkey
    I am new to the world of batch scripting, so please forgive what may appear to be basic questions. I am learning as I get assigned different jobs, and I am a huge proponent of automation where possible. I have several batch files that perform several tasks. Each of these files had its paths hard-coded, e.g. c:\temp, d:\data, etc., in the batch file. Initially I moved these to a text file I could call from a batch file, e.g.

        for /f "tokens=1,2 delims==" %%R in (config.txt) do (
            if %%R==bdata set bdata=%%S
            if %%R==cdata set cdata=%%S
        )

    The config.txt file contains these values:

        bdata=c:\temp
        cdata=d:\data

    I realized that each time I needed to create a new variable, I would need to update the config.txt file as well as the config.bat file. I decided I would move all the values to just the config.bat file as follows:

        set bdata=c:\temp
        set cdata=d:\data

    I then updated each of the existing batch files to call the variables rather than the hard-coded paths. I also added the following lines of code to each batch file except config.bat. The only additional line added to the config.bat file is @echo off.

        @echo off
        setlocal enableextensions enabledelayedexpansion
        call config.bat

    I then have another batch file that centralizes calling all the batch files in sequence. The name of this batch file is start.bat. The reason I am using start /wait is because there have been instances where delete.bat runs before compress.bat has had an opportunity to finish.

        start /wait compress.bat
        start /wait validate.bat
        start /wait delete.bat

    Questions:

    1. Is this the best way to centralize values and, if not, what is a better way?
    2. Do I need to specify setlocal enableextensions enabledelayedexpansion in all the existing batch files?
    3. Do all the batch files have to have @echo off, or is it sufficient for just the config.bat file?
    4. Is start /wait the best way to call multiple files? Can I pass values from one batch file to another using the said command?
    5. All the batch files have different functions, e.g. move, delete, etc., however they use %%a or %%b. Is this okay? For example, the validate.bat file has the code

        for %%a in (%bdata%\*.*) do if "%%~xa" == "" move /Y "%bdata%\%%~xa" "%bdata%\%done%"

    and the delete.bat file has the code

        for %%a in (%bdata%\*.*) do if "%%~xa" == ".txt" del "%%a"
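    On question 4, a hedged sketch of the usual alternative: call (unlike start /wait) runs the child batch file in the same command processor, which lets both arguments and environment variables flow between scripts:

    ```bat
    @echo off
    rem main.bat - run the steps in order, in one session (file names match the post).
    call config.bat
    call compress.bat "%bdata%"   & rem a value can be passed as an argument...
    call validate.bat
    call delete.bat

    rem ...and inside compress.bat the argument arrives as %1:
    rem     set "target=%~1"
    ```

    Because call shares the environment, anything config.bat sets is visible to the later scripts; start /wait spawns a separate process, so environment changes made there die with it, and the only things that cross the boundary are the command line and the exit code.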


  • Compiling PHP with GD crashes with EXC_BREAKPOINT (SIGTRAP) on PPC Mac

    - by Ömer
    First of all, I should say that I have searched the whole Internet for this problem but I couldn't find any solution yet. I have a Mac mini PowerPC (PPC) and I run the Apache webserver (httpd-2.2.22) with PHP (5.4.0), and I do all the configure & compilation jobs myself. If I configure with:

        './configure' '--prefix=/usr/local/php5' '--mandir=/usr/share/man' '--infodir=/usr/share/info' '--sysconfdir=/etc' '--with-config-file-path=/etc' '--with-zlib' '--with-zlib-dir=/usr' '--with-openssl=/usr' '--without-iconv' '--enable-exif' '--enable-ftp' '--enable-mbstring' '--enable-mbregex' '--enable-sockets' '--with-mysql=/usr/local/mysql' '--with-pdo-mysql=/usr/local/mysql' '--with-mysqli=/usr/local/mysql/bin/mysql_config' '--with-apxs2=/usr/local/apache2/bin/apxs' '--with-mcrypt'

    then PHP works flawlessly. But if I add the GD module by adding these to the script above:

        '--with-gd' '--with-jpeg-dir=/usr/local/lib' '--with-freetype-dir=/usr/X11R6' '--with-png-dir=/usr/X11R6' '--with-xpm-dir=/usr/X11R6'

    then PHP gets configured and compiled without any errors, but it causes EXC_BREAKPOINT (SIGTRAP) (see the Crash Reporter log below) when I request a page which calls the PHP module. It's obvious that something related to the GD module is causing this, probably the FreeType module because it's present in the log, but that may not be definite of course. When PHP crashes (or more accurately, httpd), the CPU goes to 100% for 10 to 15 seconds until it recovers. I need to use the GD module and keep the Mac mini PowerPC. So, what should I do to solve this problem?

        Process:         httpd [79852]
        Path:            /usr/local/apache2/bin/httpd
        Identifier:      httpd
        Version:         ??? (???)
        Code Type:       PPC (Native)
        Parent Process:  httpd [79846]
        Date/Time:       2013-11-04 15:44:28.444 +0200
        OS Version:      Mac OS X 10.5.8 (9L31a)
        Report Version:  6
        Anonymous UUID:  0178B7F8-2241-43F7-A651-9E7234D41A37

        Exception Type:  EXC_BREAKPOINT (SIGTRAP)
        Exception Codes: 0x0000000000000001, 0x0000000093c11e0c
        Crashed Thread:  0

        Application Specific Information:
        *** single-threaded process forked ***

        Thread 0 Crashed:
        0  com.apple.CoreFoundation       0x93c11e0c __CFRunLoopFindMode + 328
        1  com.apple.CoreFoundation       0x93c13d88 CFRunLoopAddSource + 276
        2  com.apple.DiskArbitration      0x901a6e8c DAApprovalSessionScheduleWithRunLoop + 52
        3  ...ple.CoreServices.CarbonCore 0x9512e67c _FSGetDiskArbSession(__DASession**, __DAApprovalSession**) + 540
        4  ...ple.CoreServices.CarbonCore 0x9512e420 CreateDiskArbDiskForMountPath(char const*) + 84
        5  ...ple.CoreServices.CarbonCore 0x9512d2c8 FSCacheableClient_GetVolumeCachedInfo(char const*, statfs const*, CachedVolumeInfo*, __DADisk*, __DADisk**) + 280
        6  ...ple.CoreServices.CarbonCore 0x9512cca4 MountVolume(char const*, statfs*, unsigned char, unsigned char, __DADisk*, short*) + 352
        7  ...ple.CoreServices.CarbonCore 0x9512ca48 MountInitialVolumes() + 172
        8  ...ple.CoreServices.CarbonCore 0x9512c4d4 INIT_FileManager() + 164
        9  ...ple.CoreServices.CarbonCore 0x9512c390 GetRetainedVolFSVCBByVolumeID(unsigned long) + 48
        10 ...ple.CoreServices.CarbonCore 0x9512adf4 PathGetObjectInfo(char const*, unsigned long, unsigned long, VolumeInfo**, unsigned long*, unsigned long*, char*, unsigned long*, unsigned char*) + 184
        11 ...ple.CoreServices.CarbonCore 0x9512acc4 FSPathMakeRefInternal(unsigned char const*, unsigned long, unsigned long, FSRef*, unsigned char*) + 64
        12 libfreetype.6.dylib            0x0070a0fc FT_New_Face_From_Resource + 56
        13 libfreetype.6.dylib            0x0070a3b0 FT_New_Face + 48
        14 libphp5.so                     0x0118d1a8 fontFetch + 824
        15 libphp5.so                     0x0118edac php_gd_gdCacheGet + 220
        16 libphp5.so                     0x0118d6d8 php_gd_gdImageStringFTEx + 360
        17 libphp5.so                     0x011763c0 php_imagettftext_common + 1504
        18 libphp5.so                     0x01176494 zif_imagefttext + 20
        19 libphp5.so                     0x014b9c68 zend_do_fcall_common_helper_SPEC + 1048
        20 libphp5.so                     0x01452898 _ZEND_DO_FCALL_SPEC_CONST_HANDLER + 440
        21 libphp5.so                     0x014ba878 execute + 776
        22 libphp5.so                     0x013f190c zend_execute_scripts + 316
        23 libphp5.so                     0x013779f4 php_execute_script + 596
        24 libphp5.so                     0x014bbe64 php_handler + 1972
        25 httpd                          0x000020c0 ap_run_handler + 96
        26 httpd                          0x00006ae0 ap_invoke_handler + 224
        27 httpd                          0x000305c4 ap_process_request + 116
        28 httpd                          0x0002c768 ap_process_http_connection + 104
        29 httpd                          0x00012d30 ap_run_process_connection + 96
        30 httpd                          0x00012ecc ap_process_connection + 92
        31 httpd                          0x000373e4 child_main + 1220
        32 httpd                          0x000376a8 make_child + 296
        33 httpd                          0x000377e4 startup_children + 100
        34 httpd                          0x000387d4 ap_mpm_run + 3988
        35 httpd                          0x0000a320 main + 3280
        36 httpd                          0x000019c0 start + 64


  • iptables management tools for large scale environment

    - by womble
    The environment I'm operating in is a large-scale web hosting operation (several hundred servers under management, almost-all-public addressing, etc. -- so anything that talks about managing ADSL links is unlikely to work well), and we're looking for something that will be comfortable managing both the core ruleset (around 12,000 entries in iptables at current count) plus the host-based rulesets we manage for customers. Our core router ruleset changes a few times a day, and the host-based rulesets would change maybe 50 times a month (across all the servers, so maybe one change per five servers per month). We're currently using filtergen (which is balls in general, and super-balls at our scale of operation), and I've used shorewall in the past at other jobs (which would be preferable to filtergen, but I figure there's got to be something out there that's better than that).

    The "musts" we've come up with for any replacement system are:

    - Must generate a ruleset fairly quickly (a filtergen run on our ruleset takes 15-20 minutes; this is just insane) -- this is related to the next point:
    - Must generate an iptables-restore style file and load that in one hit, not call iptables for every rule insert
    - Must not take down the firewall for an extended period while the ruleset reloads (again, this is a consequence of the above point)
    - Must support IPv6 (we aren't deploying anything new that isn't IPv6 compatible)
    - Must be DFSG-free
    - Must use plain-text configuration files (as we run everything through revision control, and using standard Unix text-manipulation tools is our SOP)
    - Must support both RedHat and Debian (packaged preferred, but at the very least mustn't be overtly hostile to either distro's standards)
    - Must support the ability to run arbitrary iptables commands to support features that aren't part of the system's "native language"

    Anything that doesn't meet all these criteria will not be considered. The following are our "nice to haves":

    - Should support config file "fragments" (that is, you can drop a pile of files in a directory and say to the firewall "include everything in this directory in the ruleset"; we use configuration management extensively and would like to use this feature to provide service-specific rules automatically)
    - Should support raw tables
    - Should allow you to specify particular ICMP in both incoming packets and REJECT rules
    - Should gracefully support hostnames that resolve to more than one IP address (we've been caught by this one a few times with filtergen; it's a rather royal pain in the butt)

    The more optional/weird iptables features that the tool supports (either natively or via existing or easily-writable plugins) the better. We use strange features of iptables now and then, and the more of those that "just work", the better for everyone.
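    Whatever generator wins, the atomic-load requirement in the second "must" ends up looking roughly like this; a minimal sketch, assuming the generated ruleset lives at /etc/firewall/ruleset.v4 (path is illustrative):

    ```sh
    #!/bin/sh
    # Apply a generated ruleset in one hit: iptables-restore replaces whole
    # tables at once, so there is no window with a partially-loaded firewall,
    # unlike a script that calls iptables once per rule.
    set -e
    iptables-save > /tmp/rules.backup          # keep a rollback copy
    if ! iptables-restore < /etc/firewall/ruleset.v4; then
        iptables-restore < /tmp/rules.backup   # restore on failure
        echo "ruleset failed to load; rolled back" >&2
        exit 1
    fi
    ip6tables-restore < /etc/firewall/ruleset.v6   # same idea for the IPv6 must
    ```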


  • Mailer issue, PHP values do not change

    - by Roland
    I have a script that runs once every month and sends out stats to clients. The stats are displayed in normal text and in the shape of a pie graph. If I run the script manually from the command line, all the info on the graphs is correct, but when the cron job executes the script, the values for the first client are displayed on the graphs of all clients, while the text is correct. I'm using domDocument to build the HTML and PHPMailer to send out the email with the graphs embedded into the mail; I also use pChart to generate the graph. My code that generates the pie graph is below:

        include_once "pChart.1.26e/pChart/pData.class";
        include_once "pChart.1.26e/pChart/pChart.class";

        // Dataset definition
        unset($DataSet);
        $DataSet = new pData;
        $DataSet->AddPoint(array($data['total_clicks'],$remaining),"Serie1");
        if($remaining < 0){
            $DataSet->AddPoint(array("Clicks delivered todate","Clicks remaining = 0"),"Serie2");
        }else{
            $DataSet->AddPoint(array("Clicks delivered todate","Clicks remaining"),"Serie2");
        }
        $DataSet->AddAllSeries();
        $DataSet->SetAbsciseLabelSerie("Serie2");

        // Initialise the graph
        $pie = new pChart(492,292);
        $pie->drawBackground(255,255,254);
        $pie->LineWidth = 1.1;
        $pie->Values = 2;
        // $pie->drawRoundedRectangle(5,5,375,195,5,230,230,230);
        //$pie->drawRectangle(0,0,480,288,169,169,169);
        $pie->drawRectangle(5,5,487,287,169,169,169);
        $pie->loadColorPalette('pChart.1.26e/color/tones-3.txt',',');

        // Draw the pie chart
        $pie->setFontProperties("pChart.1.26e/Fonts/calibrib.ttf",18);
        $pie->drawTitle(140,33,"Campaign Overview",0,0,0);
        $pie->setFontProperties("pChart.1.26e/Fonts/calibrib.ttf",11);
        $pie->drawTitle(343,125,"Total clicks : ".$total_clicks,0,0,0);
        $pie->setFontProperties("pChart.1.26e/Fonts/calibri.ttf",10);
        if($remaining < 0){
            $pie->setFontProperties("pChart.1.26e/Fonts/calibrib.ttf",10);
            $pie->drawTitle(260,250,"Campaign over-delivered by ".substr($remaining,1)." clicks",205,53,53);
            $pie->setFontProperties("pChart.1.26e/Fonts/calibri.ttf",10);
        }
        $pie->drawPieLegend(328,140,$DataSet->GetData(),$DataSet->GetDataDescription(),255,255,255);
        $pie->drawPieGraph($DataSet->GetData(),$DataSet->GetDataDescription(),170,150,130,PIE_VALUE,FALSE,50,30,0);
        $pie->Render("generated/3dpie.png");
        unset($pie);
        unset($DataSet);

        $mail->AddEmbeddedImage("/var/www/html/stats/generated/3dpie.png","5");

    I just can't understand why this only happens when the cron job runs?
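    One detail worth checking, hedged since it depends on how cron invokes the script: Render() writes to the relative path generated/3dpie.png while AddEmbeddedImage() reads an absolute path, and cron jobs usually start in a different working directory than an interactive shell, so the file being embedded may not be the one just rendered. Writing each client's chart to its own absolute path sidesteps both that and any reuse of a stale file ($clientId is an illustrative variable, not from the original code):

    ```php
    <?php
    // Render to an absolute, per-client filename so the embedded image
    // can never be another client's (or a stale) chart.
    $chartPath = "/var/www/html/stats/generated/pie_" . $clientId . ".png";
    $pie->Render($chartPath);
    unset($pie);
    unset($DataSet);

    // Give each embedded image a unique CID as well.
    $mail->AddEmbeddedImage($chartPath, "pie_" . $clientId);
    ```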


  • Unable to locate Windows Server Error log files

    - by Sam007
    I am getting an error in my application: 500 Internal Server Error. Firebug gives me:

        NetworkError: 500 Internal Server Error - http://webgis.arizona.edu/ArcGIS/rest/services/webGIS/Shock_Models/GPServer/Income_Log/jobs/jc09c501156564f71abc5d98393581267/results/final_shp?dpi=96&transparent=true&format=png8&imageSR=102100&f=image&bbox=%7B%22xmin%22%3A-14519891.438356264%2C%22ymin%22%3A637618.0139790997%2C%22xmax%22%3A-6692739.741956295%2C%22ymax%22%3A6507981.786279075%2C%22spatialReference%22%3A%7B%22wkid%22%3A102100%7D%7D&bboxSR=102100&size=800%2C600

    And when I go to that particular link, I get this error: Server Error - Object reference not set to an instance of an object. Any idea how I can correct it?

    UPDATE: I was told to use Fiddler and see the error details occurring over the network, and this is the output that I got:

        SESSION STATE: Done.
        Response Entity Size: 849 bytes.
        == FLAGS ==================
        BitFlags: [ClientPipeReused, ServerPipeReused] 0x18
        X-CLIENTPORT: 2010
        X-RESPONSEBODYTRANSFERLENGTH: 849
        X-EGRESSPORT: 2023
        X-HOSTIP: 128.196.53.161
        X-PROCESSINFO: firefox:2248
        X-CLIENTIP: 127.0.0.1
        X-SERVERSOCKET: REUSE ServerPipe#2
        == TIMING INFO ============
        ClientConnected: 15:53:51.383
        ClientBeginRequest: 15:53:51.494
        GotRequestHeaders: 15:53:51.494
        ClientDoneRequest: 15:53:51.494
        Determine Gateway: 0ms
        DNS Lookup: 0ms
        TCP/IP Connect: 0ms
        HTTPS Handshake: 0ms
        ServerConnected: 15:52:45.077
        FiddlerBeginRequest: 15:53:51.495
        ServerGotRequest: 15:53:51.495
        ServerBeginResponse: 15:53:51.679
        GotResponseHeaders: 15:53:51.679
        ServerDoneResponse: 15:53:51.679
        ClientBeginResponse: 15:53:51.679
        ClientDoneResponse: 15:53:51.679
        Overall Elapsed: 00:00:00.1850106
        The response was buffered before delivery to the client.
        == WININET CACHE INFO ============
        This URL is not present in the WinINET cache. [Code: 2]
        * Note: Data above shows WinINET's current cache state, not the state at the time of the request.
        * Note: Data above shows WinINET's Medium Integrity (non-Protected Mode) cache only.

    But I am still confused as to what the error is. This is the application. I am not sure if the error is due to the ArcGIS Server or Windows Server 2008. I am new to working with Windows Server and wanted to know where I can look for the error log files. This is the link which gives the details and the log info of the job executed. This is the output.


  • How and why do I set up a C# build machine?

    - by mmr
    Hi all, I'm working with a small (4 person) development team on a C# project. I've proposed setting up a build machine which will do nightly builds and tests of the project, because I understand that this is a Good Thing. Trouble is, we don't have a whole lot of budget here, so I have to justify the expense to the powers that be. So I want to know:

    - What kind of tools/licenses will I need? Right now, we use Visual Studio and Smart Assembly to build, and Perforce for source control. Will I need something else, or is there an equivalent of a cron job for running automated scripts?
    - What, exactly, will this get me, other than an indication of a broken build? Should I set up test projects in this solution (sln file) that will be run by these scripts, so I can have particular functions tested? We have, at the moment, two such tests, because we haven't had the time (or frankly, the experience) to make good unit tests.
    - What kind of hardware will I need for this?
    - Once a build has been finished and tested, is it a common practice to put that build up on an ftp site or have some other way for internal access? The idea is that this machine makes the build, and we all go to it, but can make debug builds if we have to.
    - How often should we make this kind of build?
    - How is space managed? If we make nightly builds, should we keep around all the old builds, or start to ditch them after about a week or so?
    - Is there anything else I'm not seeing here?

    I realize that this is a very large topic, and I'm just starting out. I couldn't find a duplicate of this question here, and if there's a book out there I should just get, please let me know.

    EDIT: I finally got it to work! Hudson is completely fantastic, and FxCop is showing that some features we thought were implemented were actually incomplete. We also had to change the installer type from Old-And-Busted vdproj to New Hotness WiX. Basically, for those who are paying attention, if you can run your build from the command line, then you can put it into Hudson. Making the build run from the command line via MSBuild is a useful exercise in itself, because it forces your tools to be current.
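    As a concrete starting point for the command-line build the EDIT describes, a hedged sketch (the solution name and test runner are illustrative, not from the original post):

    ```bat
    rem Build from the command line; if this works, a CI server like Hudson can run it too.
    msbuild MySolution.sln /t:Rebuild /p:Configuration=Release

    rem Then run whatever test runner the project uses, e.g. the NUnit console:
    rem nunit-console.exe MyProject.Tests\bin\Release\MyProject.Tests.dll
    ```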


  • UNIX Questions to be answered??? [closed]

    - by Nits
    1. Create a tree structure named 'training' in which there are 3 subdirectories – 'level 1', 'level 2' and 'cep'. Each one is again further divided into 3. The 'level 1' is divided into 'sdp', 're' and 'se'. From the subdirectory 'se', how can one reach the home directory in one step, and also how to navigate to the subdirectory 'sdp' in one step? Give the commands which do the above actions.
    2. How will you copy a directory structure dir1 to dir2 (with all the subdirectories)?
    3. How can you find out if you have the permission to send a message?
    4. Find the space occupied (in bytes) by the /home directory including all its subdirectories.
    5. What is the command for printing the current time in 24-hour format?
    6. What is the command for printing the year, month, and date with a horizontal tab between the fields?
    7. Create the following files: chapa, chapb, chapc, chapd, chape, chapA, chapB, chapC, chapD, chapE, chap01, chap02, chap03, chap04, chap05, chap11, chap12, chap13, chap14, and chap15.
    8. With reference to question 7, what is the command for listing all files ending in small letters?
    9. With reference to question 7, what is the command for listing all files ending in capitals?
    10. With reference to question 7, what is the command for listing all files whose last but one character is 0?
    11. With reference to question 7, what is the command for listing all files which end in small letters but not 'a' and 'c'?
    12. In an organisation one wants to know how many programmers there are. The employee data is stored in a file called 'personnel' with one record per employee. Every record has a field for designation. How can grep be used for this purpose?
    13. In the organisation mentioned in question 12, how can sed be used to print only the records of all employees who are programmers?
    14. In the organisation mentioned in question 12, how can sed be used to change the designation 'programmer' to 'software professional' everywhere in the 'personnel' file?
    15. Find out about the sleep command and start five jobs in the background, each one sleeping for 10 minutes.
    16. How do you get the status of all the processes running on the system, i.e. using what option?
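    For anyone working through these, a hedged set of sample answers (GNU coreutils and a POSIX-ish shell assumed; several questions have more than one valid form, and directory names are shown without spaces for simplicity):

    ```sh
    mkdir -p training/level1/{sdp,re,se} training/level2 training/cep  # Q1: build part of the tree
    cd ~                               # Q1: home directory in one step, from anywhere
    cd ../sdp                          # Q1: sibling 'sdp' in one step, from inside 'se'
    cp -r dir1 dir2                    # Q2: copy a directory structure with subdirectories
    mesg                               # Q3: prints y/n - whether others may message you
    du -sb /home                       # Q4: total bytes, subdirectories included
    date +%T                           # Q5: current time in 24-hour format
    date "+%Y%t%m%t%d"                 # Q6: year, month, date separated by tabs
    touch chap{a..e} chap{A..E} chap0{1..5} chap1{1..5}   # Q7: create the files
    ls chap[a-z]                       # Q8: ending in small letters
    ls chap[A-Z]                       # Q9: ending in capitals
    ls *0?                             # Q10: last-but-one character is 0
    ls chap[bd-z]                      # Q11: small letters except 'a' and 'c'
    grep -c 'programmer' personnel     # Q12: count the programmer records
    sed -n '/programmer/p' personnel   # Q13: print only programmer records
    sed 's/programmer/software professional/g' personnel  # Q14: change the designation
    for i in 1 2 3 4 5; do sleep 600 & done               # Q15: five 10-minute background jobs
    ps -ef                             # Q16: all processes (-e every process, -f full listing)
    ```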


  • ASA5505 Novice. Setting up Outside/Inside/and DMZ as Guest Network

    - by GriffJ
    I need a little help in developing a config for our ASA5505. I'm an MCSA/MCITPAS but I don't have a lot of practical Cisco experience. Here is what I need help with: we currently have a PIX as our border gateway and, well, it's antiquated and it only has a 50-user license, which means I'm constantly clearing local-host throughout the day as people complain. I discovered that the last IT person bought a couple of ASA5505s and they've been sitting in the back of a cupboard. So far I've duplicated the configuration from the PIX to the ASA, but as I was going to be going this far, I thought I'd go further and remove another old Cisco router that was used only for the guest network; I know the ASA can do both jobs. So I'm going to paste a scenario I wrote up with the actual IPs changed to protect the innocent.

    Outside Network: 1.2.3.10 255.255.255.248 (we have a /29)
    Inside Network: 10.10.36.0 255.255.252.0
    DMZ Network: 192.168.15.0 255.255.255.0

    Outside Network on e0/0
    DMZ Network on e0/1
    Inside Network on e0/2-7

    DMZ Network has DHCPD enabled. DMZ DHCPD pool is 192.168.15.50-192.168.15.250.
    DMZ Network needs to be able to see DNS on the inside network at 10.10.37.11 and 10.10.37.12.
    DMZ Network needs to be able to access webmail on the inside network at 10.10.37.15.
    DMZ Network needs to be able to access the business website on the inside network at 10.10.37.17.
    DMZ Network needs to be able to access the outside network (access to the internet).
    Inside Network has NO DHCPD (DHCP is handled by a domain controller).
    Inside Network needs to be able to see anything on the DMZ network.
    Inside Network needs to be able to access the outside network (access to the internet).

    There is some access-list stuff already, some static mapping already. This maps external IPs from our ISP to our inside server IPs:

        static (inside,outside) 1.2.3.11 10.10.37.15 netmask 255.255.255.255
        static (inside,outside) 1.2.3.12 10.10.37.17 netmask 255.255.255.255
        static (inside,outside) 1.2.3.13 10.10.37.20 netmask 255.255.255.255

    This allows access to our webserver/mailserver/VPN from the outside:

        access-list 108 permit tcp any host 1.2.3.11 eq https
        access-list 108 permit tcp any host 1.2.3.11 eq smtp
        access-list 108 permit tcp any host 1.2.3.11 eq 993
        access-list 108 permit tcp any host 1.2.3.11 eq 465
        access-list 108 permit tcp any host 1.2.3.12 eq www
        access-list 108 permit tcp any host 1.2.3.12 eq https
        access-list 108 permit tcp any host 1.2.3.13 eq pptp

    Here is all the NAT and route stuff I have so far:

        global (outside) 1 interface
        global (outside) 2 1.2.3.11-1.2.3.14 netmask 255.255.255.248
        nat (inside) 1 0.0.0.0 0.0.0.0
        nat (dmz) 1 0.0.0.0 0.0.0.0
        route outside 0.0.0.0 0.0.0.0 1.2.3.9 1
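    Not an answer to the whole design, but the DMZ-to-inside pinholes listed above would look roughly like this in ASA syntax; a hedged sketch (the ACL name and the assumption that the DMZ interface is named "dmz" are illustrative, and the syntax should be checked against your ASA software version):

    ```
    ! Allow the DMZ to reach inside DNS, webmail and the business website, plus the internet.
    access-list DMZ_IN extended permit udp 192.168.15.0 255.255.255.0 host 10.10.37.11 eq domain
    access-list DMZ_IN extended permit udp 192.168.15.0 255.255.255.0 host 10.10.37.12 eq domain
    access-list DMZ_IN extended permit tcp 192.168.15.0 255.255.255.0 host 10.10.37.15 eq https
    access-list DMZ_IN extended permit tcp 192.168.15.0 255.255.255.0 host 10.10.37.17 eq www
    access-list DMZ_IN extended deny ip 192.168.15.0 255.255.255.0 10.10.36.0 255.255.252.0
    access-list DMZ_IN extended permit ip 192.168.15.0 255.255.255.0 any
    access-group DMZ_IN in interface dmz
    ```

    The deny-then-permit ordering matters: everything inside that isn't explicitly opened stays blocked, while the final permit gives the guest network its internet access.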


  • Is it possible to create a service like Feed My Inbox on my own server?

    - by Mark Bowen
    I was just wondering if it's at all possible to create a service like Feed My Inbox on my own server using PHP. Basically I have a site which has RSS feeds which are dynamic in nature and can search from thousands of posts based on many different criteria. I have the RSS feed working fine and bringing back data dynamically for whatever criteria I want, so that bit's fine. I am using the ExpressionEngine CMS to handle the site, and there will be thousands of users on the site (currently there are around 2,0000), but that number is growing exponentially every single day.

    What I want to be able to do is allow the users to choose from certain criteria which will then build a dynamic RSS URL, which will then be stored in a database table (one row for each user). This bit I will be able to do myself, but then I want to be able to send out new RSS feed items via e-mail to each user. This is the part I'm a little stuck on. I'm guessing I would somehow need to run a cron job to hit a page which would check each user's RSS feed and then, if there are new items, send them to the user via e-mail. That's where I am totally stuck, though, and I'm just wondering what the best way to go about it would be -- that, or any software in PHP that already does this sort of thing would be great. I tried out phpList, but it has severe problems working with RSS; I only ever got it to work once and never again, and I've read that lots of people have had this same problem, so unfortunately it's not just me :-(

    I know there are services such as Feed My Inbox which I could easily set up so that users click a link and their RSS feed URL is added to go and use that service, but I want to keep users from seeing the dynamic nature of the feed, or they will easily be able to modify it to get at other items in the feed. I need this so that I can charge for access to the feeds, but if people can see the URL of the feed then I will be totally unstuck, as they will be able to get at whatever they want very easily. Therefore I'd like to be able to send the items out to them. Would really love to hear if anyone knows if this kind of thing is possible at all and what would be involved.
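    In principle the piece described here is a small cron-driven poller; a minimal sketch under assumed table and column names (users with id/feed_url/email, and a sent_items table tracking mailed GUIDs -- all illustrative), using PHP's bundled SimpleXML and mail():

    ```php
    <?php
    // poll_feeds.php - run from cron, e.g.  */15 * * * * php /path/to/poll_feeds.php
    $db = new PDO('mysql:host=localhost;dbname=site', 'user', 'pass');

    $seen = $db->prepare('SELECT 1 FROM sent_items WHERE user_id = ? AND guid = ?');
    $mark = $db->prepare('INSERT INTO sent_items (user_id, guid) VALUES (?, ?)');

    foreach ($db->query('SELECT id, feed_url, email FROM users') as $u) {
        $feed = @simplexml_load_file($u['feed_url']);
        if ($feed === false) continue;                    // feed unreachable; try next run

        foreach ($feed->channel->item as $item) {
            $guid = (string)($item->guid ?: $item->link); // fall back to link as identity
            $seen->execute(array($u['id'], $guid));
            if ($seen->fetch()) continue;                 // already mailed this item

            mail($u['email'], 'New item: ' . $item->title,
                 $item->title . "\n" . $item->link);
            $mark->execute(array($u['id'], $guid));
        }
    }
    ```

    Since the script runs server-side, the dynamic feed URL never reaches the user, which is the whole point of rolling your own instead of handing the URL to Feed My Inbox.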


  • How to tune system settings for mongoDB on Linux?

    - by jsh
    Trying to squeeze a lot out of one question here -- please bear with me. Although the MongoDB man pages make several useful recommendations about system settings like ulimit (http://docs.mongodb.org/manual/reference/ulimit/), and other production factors (http://docs.mongodb.org/manual/administration/production-notes/), they seem mysteriously silent on things like virtual memory and swap settings. The closest we get to a hint is that "...the operating system's virtual memory subsystem manages MongoDB's memory..." (http://docs.mongodb.org/manual/faq/fundamentals/#does-mongodb-require-a-lot-of-ram).

    Running the same job -- high writes and high reads on about 10,000,000 records in a single collection -- on my 4-processor, 4GB RAM MacBook and an 8-core Ubuntu box with 64GB RAM, I saw dramatically WORSE read performance on the Linux box with factory settings, and could hear the disk constantly spinning, indicating high I/O and presumably swapping. Yes, other things were happening on the box, but there was plenty of free RAM, disk space, etc.; furthermore, I did not see evidence that Mongo was expanding to take advantage of all that free RAM as it is touted to do. The Linux box default settings were as follows:

        vm.swappiness = 60
        vm.dirty_background_ratio = 10
        vm.dirty_ratio = 20
        vm.dirty_expire_centisecs = 3000
        vm.dirty_writeback_centisecs = 500

    I hazarded some guesses looking at docs and blogs for other types of databases (Oracle, MySQL, etc.), experimented, and adjusted as below:

        vm.swappiness = 10
        vm.dirty_background_ratio = 5
        vm.dirty_ratio = 5
        vm.dirty_writeback_centisecs = 250
        vm.dirty_expire_centisecs = 500

    I saw some immediate apparent improvements in read time. However, when I ran my test jobs again, read performance continued to be painfully sluggish during heavy writes. Then I REBUILT the collection from an available data source -- and suddenly I can read at 1ms or less per record WHILE doing the write job! So the question is really two-fold:

    1. What are appropriate VM settings for MongoDB on Linux?
    2. (bonus) Does Mongo do some checking or optimization with the OS while data is being built? In other words, if I have built a large data set with suboptimal VM or I/O settings, does Mongo make assumptions during the memory-mapping process that will fail to take advantage of optimizations down the road?

    Obviously I don't fully grok memory mapping under the hood (I was hoping I wouldn't have to). Any help appreciated... thanks! -j
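    For reference, a sketch of applying and persisting the adjusted values (standard sysctl usage on mainline Linux; run as root):

    ```sh
    # Apply at runtime
    sysctl -w vm.swappiness=10
    sysctl -w vm.dirty_background_ratio=5
    sysctl -w vm.dirty_ratio=5
    sysctl -w vm.dirty_writeback_centisecs=250
    sysctl -w vm.dirty_expire_centisecs=500

    # Persist across reboots
    cat >> /etc/sysctl.conf <<'EOF'
    vm.swappiness = 10
    vm.dirty_background_ratio = 5
    vm.dirty_ratio = 5
    vm.dirty_writeback_centisecs = 250
    vm.dirty_expire_centisecs = 500
    EOF
    ```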


  • Pushing Large Files to 500+ Computers [closed]

    - by WMIF
    I work with a team to manage 500-600 rented Windows 7 computers for an annual conference. We have a large amount of data that needs to be synced to these computers, up to 1 TiB. The computers are divided into rooms and connected through unmanaged gigabit switches. We prepare these computers ahead of time with the Windows installation and configuration, plus any files that we have available to us before we send the base image in for replication by the rental company.

    Every year, we have presenters approach on site with up to gigs of data that need to be pushed to the room that they will be presenting in. Sometimes they only have a few files of small sizes, such as a slide PDF, but the data can sometimes be much larger, around 5 GiB. Our current strategy for pushing these files is using batch scripts and RoboCopy. For the large pushes, we actually use a BitTorrent client to generate a torrent file, and then we use the batch-RoboCopy to push the torrent into a folder on the remote machines that is being monitored by an installed BT client. Often this data needs to be pushed immediately within a small time window. We have several machines in a control room that are identical to the machines on the floor that we use for these pushes. We occasionally have a need to execute a program on the remote machines, and we currently use batch and PsExec to handle this task.

    We would love to be able to respond to these last-minute pushes with "sorry, your own fault", but it won't happen. The BT method has allowed us to have a much faster response time, but the whole batch process can get messy when there are multiple jobs being pushed. We use Enterprise Ghost for other processes, and it doesn't work well at this large a scale, plus it is really quite expensive for a once-a-year task like this.

    EDIT: There is a hard requirement that the remote machines on the floor are running Windows. The control machines do not have a hard OS requirement. I would really like to stay away from multicast because of complications with upstream routers. Is multicast or BitTorrent the better way to go on this? Is there another protocol that might work better?


  • How to use AD/GPO/Print Services to "push out" a new printer driver to replace a broken one? How did my server get a broken driver?

    - by Zac B
    Context: We have an AD/GPO-managed corporate network with a little over a hundred PCs running Windows 7 x64, and a few managed printers. Our Server 2008 R2 primary domain controller is configured as a print server for them all.

    Problem: After a recent Windows update and restart (no printer driver updates were included) on the DC, a particular shared printer (Lexmark T650) has begun exhibiting some strange behavior. First, it prints a preceding and following blank page for almost every document, on jobs submitted by about half of client machines (no separator page is configured on the server or any of the clients I've seen). Second, whenever someone tries to access "Printing Preferences" on any client, they receive an error message (screenshot in the original post). This happens everywhere, 100% of the time, and didn't happen before the update on the DC. Once they click "OK", the prefs screen appears (with no separator page selected) and everything seems fine. I'm not even sure if these two issues are related, but everyone seems affected by one or both of them.

    What I've Tried: I've been hesitant to un-deploy the problem printer, or remove it via GPO, as it's pretty heavily used. I've tried updating (via MS Update and our internal WSUS server) client machines and the DC. No printer driver updates have appeared, and no number of updates or restarts on the server or the client seems to have achieved anything other than my boss getting grumpy that I'm bouncing the domain controller so often. I've tried deleting the drivers on the server and re-installing them from the original source that has worked for the past year... no change. I've tried selecting "New Driver" for one of the shared printers on a client machine, running as domain admin, and pushed the latest driver found by MS Update back up to the DC. This changed the version number of the driver recorded in the print server manager, but caused no change -- on the client I pushed from, or on any other. The error still appears.

    Question: Why the heck is this happening? Obviously I got a bad driver from somewhere, but how do I get rid of it? I don't know of any "roll back drivers" functionality for centrally managed print drivers like Windows offers for other devices. How would I a) get this issue resolved on a client, and b) push the fix to the other members of the domain?


  • Reliable file copy (move) process - mostly Unix/Linux

    - by mfinni
    Short story: We have a need for a rock-solid, reliable file-mover process. We have source directories that are often being written to that we need to move files from. The files come in pairs -- a big binary, and a small XML index. We get a CTL file that defines these file bundles. There is a process that operates on the files once they are in the destination directory; it gets rid of them when it's done. Would rsync do the best job, or do we need to get more complex?

    Long story as follows: We have multiple sources to pull from: one set of directories is on a Windows machine (that does have Cygwin and an SSH daemon), and a whole pile of directories are on a set of SFTP servers (most of these are also Windows). Our destinations are a list of directories on AIX servers. We used to use a very reliable Perl script on the Windows/Cygwin machine when it was our only source. However, we're working on getting rid of that machine, and there are other sources now, the SFTP servers, that we cannot presently run our own scripts on. For security reasons, we can't run the copy jobs on our AIX servers -- they have no access to the source servers. We currently have a homegrown Java program on a Linux machine that uses SFTP to pull from the various new SFTP source directories, copies to a local tmp directory, verifies that everything is present, then copies that to the AIX machines, and then deletes the files from the source. However, we're finding any number of bugs or poorly-handled error checking. None of us are Java experts, so fixing/improving this may be difficult.

    Concerns for us are: With a remote source (SFTP), will rsync leave alone any file still being written? Some of these files are large. From reading the docs, it seems like rsync will be very good about not removing the source until the destination is reliably written. Does anyone have experience confirming or disproving this?

    Additional info: We will be concerned about the ingestion process that operates on the files once they are in the destination directory. We don't want it operating on files while we are in the process of copying them; it waits until the small XML index file is present. Our current copy jobs are supposed to copy the XML file last. Sometimes the network has problems, sometimes the SFTP source servers crap out on us. Sometimes we typo the config files and a destination directory doesn't exist. We never want to lose a file due to this sort of error. We need good logs.

    If you were presented with this, would you just script up some rsync? Or would you build or buy a tool, and if so, what would it be (or what technologies would it use)? I (and others on my team) are decent with Perl.
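    On the ordering concern (XML index last), one way to express it with rsync is two passes; a hedged sketch, assuming a pull over SSH from an illustrative source host and paths:

    ```sh
    #!/bin/sh
    # Pass 1: move the big binaries, excluding the small XML index files.
    # --partial keeps interrupted large transfers resumable.
    rsync -av --partial --exclude='*.xml' source-host:/data/outbound/ /data/inbound/

    # Pass 2: now copy the XML indexes; the ingestion process only fires
    # when it sees the index, so the binary is guaranteed to be complete first.
    rsync -av source-host:/data/outbound/ /data/inbound/
    ```

    On the in-flight-file worry: rsync writes each file to a temporary name and renames it into place only when the transfer completes, so the destination never exposes a half-copied file. What it cannot detect by itself is a source file still being appended to -- which is exactly why gating ingestion on the last-written XML/CTL file is the safer contract.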


  • What "Drained" mean in this llstatus result?

    - by xslittlegrass
    I'm trying to run jobs on the cluster at our university, and I came across this result when trying to see the status of the cluster:

        llstatus
        Name        Schedd  InQ  Act  Startd  Run  LdAvg  Idle  Arch    OpSys
        pandora001  Down    0    0    Run     29   41.04  9999  POWER7  AIX61
        pandora002  Down    0    0    Busy    32   32.06  9999  POWER7  AIX61
        pandora003  Down    0    0    Drned   0    0.03   9999  POWER7  AIX61
        pandora004  Down    0    0    Busy    32   32.07  9999  POWER7  AIX61
        pandora005  Down    0    0    Busy    32   32.02  9999  POWER7  AIX61
        pandora006  Down    0    0    Busy    32   34.01  9999  POWER7  AIX61
        pandora007  Down    0    0    Busy    32   32.02  9999  POWER7  AIX61
        pandora008  Down    0    0    Drned   0    0.00   9999  POWER7  AIX61
        pandora1    Avail   15   5    Idle    0    0.00   86    POWER7  AIX61

        llstatus -R
        Machine     Consumable Resource(Available, Total)
        ----------- -------------------------------------------------
        pandora001  ConsumableCpus< 28-31 >< 0-31 > ConsumableMemory(28.297 gb,124.000 gb)+
        pandora002  ConsumableCpus< >< 0-31 > ConsumableMemory(14.625 gb,124.000 gb)+
        pandora003  ConsumableCpus< 0-31 >< 0-31 > ConsumableMemory(124.000 gb,124.000 gb)+
        pandora004  ConsumableCpus< >< 0-31 > ConsumableMemory(28.000 gb,124.000 gb)+
        pandora005  ConsumableCpus< >< 0-31 > ConsumableMemory(28.000 gb,124.000 gb)+
        pandora006  ConsumableCpus< >< 0-31 > ConsumableMemory(14.625 gb,124.000 gb)+
        pandora007  ConsumableCpus< >< 0-31 > ConsumableMemory(5.250 gb,124.000 gb)+
        pandora008  ConsumableCpus< 0-31 >< 0-31 > ConsumableMemory(124.000 gb,124.000 gb)+
        pandora1    ConsumableCpus< 0-7 >< 0-7 > ConsumableMemory(32.000 gb,32.000 gb)

    It seems that pandora003 and pandora008 are in the status "Drned". What does that mean? Why can't I use the resources on those nodes? Thanks.


  • Nginx + Nagios : 502 Bad gateway

    - by MrROY
    I have a fully fresh install of Nagios, but I can't access it. Here's my Nginx config:

        server {
            listen 80;
            server_name 61.148.45.10;
            # blahblah
            # Nagios Monitoring
            location /nagios3/ {
                proxy_pass http://127.0.0.1:80;
            }
        }

    Nagios was installed step by step (from this Linode guide):

        sudo apt-get install -y nagios3

    Then I try to visit http://ip-address/nagios3/, but it shows 502 Bad Gateway. How do I deal with this? This is my /var/log/syslog:

        Oct 25 14:18:17 my-server nagios3: SERVICE ALERT: localhost;Disk Space;WARNING;SOFT;1;DISK WARNING - free space: /boot 43 MB (20% inode=99%):
        Oct 25 14:19:07 my-server nagios3: SERVICE ALERT: localhost;HTTP;WARNING;SOFT;1;HTTP WARNING: HTTP/1.1 403 Forbidden - 319 bytes in 0.000 second response time
        Oct 25 14:19:17 my-server nagios3: SERVICE ALERT: localhost;Disk Space;WARNING;SOFT;2;DISK WARNING - free space: /boot 43 MB (20% inode=99%):
        Oct 25 14:20:07 my-server nagios3: SERVICE ALERT: localhost;HTTP;WARNING;SOFT;2;HTTP WARNING: HTTP/1.1 403 Forbidden - 319 bytes in 0.000 second response time
        Oct 25 14:20:17 my-server nagios3: SERVICE ALERT: localhost;Disk Space;WARNING;SOFT;3;DISK WARNING - free space: /boot 43 MB (20% inode=99%):
        Oct 25 14:21:07 my-server nagios3: SERVICE ALERT: localhost;HTTP;WARNING;SOFT;3;HTTP WARNING: HTTP/1.1 403 Forbidden - 319 bytes in 0.000 second response time
        Oct 25 14:21:17 my-server nagios3: SERVICE ALERT: localhost;Disk Space;WARNING;HARD;4;DISK WARNING - free space: /boot 43 MB (20% inode=99%):
        Oct 25 14:21:17 my-server nagios3: SERVICE NOTIFICATION: root;localhost;Disk Space;WARNING;notify-service-by-email;DISK WARNING - free space: /boot 43 MB (20% inode=99%):
        Oct 25 14:21:17 my-server postfix/pickup[24474]: 4F89F394034C: uid=109 from=<nagios>
        Oct 25 14:21:17 my-server postfix/cleanup[27756]: 4F89F394034C: message-id=<20131025062117.4F89F394034C@my-server>
        Oct 25 14:21:17 my-server postfix/qmgr[24475]: 4F89F394034C: from=<nagios@[email protected]>, size=594, nrcpt=1 (queue active)
        Oct 25 14:21:17 my-server postfix/local[27758]: 4F89F394034C: to=<root@localhost>, relay=local, delay=0.15, delays=0.11/0/0/0.04, dsn=2.0.0, status=sent (delivered to mailbox)
        Oct 25 14:21:17 my-server postfix/qmgr[24475]: 4F89F394034C: removed
        Oct 25 14:22:07 my-server nagios3: SERVICE ALERT: localhost;HTTP;WARNING;HARD;4;HTTP WARNING: HTTP/1.1 403 Forbidden - 319 bytes in 0.000 second response time
        Oct 25 14:22:07 my-server nagios3: SERVICE NOTIFICATION: root;localhost;HTTP;WARNING;notify-service-by-email;HTTP WARNING: HTTP/1.1 403 Forbidden - 319 bytes in 0.000 second response time
        Oct 25 14:22:07 my-server postfix/pickup[24474]: 219CA3940381: uid=109 from=<nagios>
        Oct 25 14:22:07 my-server postfix/cleanup[27756]: 219CA3940381: message-id=<20131025062207.219CA3940381@my-server>
        Oct 25 14:22:07 my-server postfix/qmgr[24475]: 219CA3940381: from=<nagios@[email protected]>, size=605, nrcpt=1 (queue active)
        Oct 25 14:22:07 my-server postfix/local[27758]: 219CA3940381: to=<root@localhost>, relay=local, delay=0.12, delays=0.07/0/0/0.05, dsn=2.0.0, status=sent (delivered to mailbox)
        Oct 25 14:22:07 my-server postfix/qmgr[24475]: 219CA3940381: removed
        Oct 25 14:39:01 my-server CRON[28242]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) ! -execdir fuser -s {} 2>/dev/null \; -delete)

    And there are lots of visits from 127.0.0.1 in the nginx log, though I actually visit from an external IP:

        127.0.0.1 - - [25/Oct/2013:14:21:02 +0800] "GET /nagios3/ HTTP/1.0" 502 575 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/30.0.1599.69 Safari/537.36"
        127.0.0.1 - - [25/Oct/2013:14:21:02 +0800] "GET /nagios3/ HTTP/1.0" 502 575 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/30.0.1599.69 Safari/537.36"
        127.0.0.1 - - [25/Oct/2013:14:21:02 +0800] "GET /nagios3/ HTTP/1.0" 502 575 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/30.0.1599.69 Safari/537.36"
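    For what it's worth, the config above proxies /nagios3/ to 127.0.0.1:80, which is nginx itself, so every request loops straight back (the 127.0.0.1 entries in the access log are those looped requests) and nginx eventually answers 502. The Ubuntu nagios3 package serves its UI through Apache, so the proxy needs to point at whatever port Apache actually listens on; a hedged sketch, assuming Apache has been moved to port 8080:

    ```nginx
    location /nagios3/ {
        proxy_pass http://127.0.0.1:8080;        # Apache serving nagios3, not nginx itself
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr; # so Apache logs the real client, not 127.0.0.1
    }
    ```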


  • Ask How-To Geek: Learning the Office Ribbon, Booting to USB with an Old BIOS, and Snapping Windows

    - by Jason Fitzpatrick
    You’ve got questions and we’ve got answers. Today we highlight how to master the new Office interface, USB boot a computer with outdated BIOS, and snap windows to preset locations. Learning the New Office Ribbon Dear How-To Geek, I feel silly asking this (in light of how long the new Office interface has been out) but my company finally got around to upgrading from Windows XP and Office 2000 so the new interface it totally new to me. Can you recommend any resources for quickly learning the Office ribbon and the new changes? I feel completely lost after two decades of the old Office interface. Help! Sincerely, Where the Hell is Everything? Dear Where the Hell, We think most people were with you at some point in the last few years. “Where the hell is…” could possibly be the slogan for the new ribbon interface. You could browse through some of the dry tutorials online or even get a weighty book on the topic but the best way to learn something new is to get hands on. Ribbon Hero turns learning the new Office features and ribbon layout into a game. It’s no vigorous round of Team Fortress mind you, but it’s significantly more fun than reading a training document. Check out how to install and configure Ribbon Hero here. You’ll be teaching your coworkers new tricks in no time. Boot via USB with an Old BIOS Dear How-To Geek, I’m trying to repurpose some old computers by updating them with lightweight Linux distros but the BIOS on most of the machines is ancient and creaky. How ancient? It doesn’t even support booting from a USB device! I have a large flash drive that I’ve turned into a master installation tool for jobs like this but I can’t use it. The computers in question have USB ports; they just aren’t recognized during the boot process. What can I do? USB Bootin’ in Boise Dear USB Bootin’, It’s great you’re working to breathe life into old hardware! You’ve run into one of the limitations of older BIOSes, USB was around but nobody was thinking about booting off of it. Fortunately if you have a computer old enough to have that kind of BIOS it’s likely to also has a floppy drive or a CDROM drive. While you could make a bootable CDROM for your application we understand that you want to keep using the master USB installer you’ve made. In light of that we recommend PLoP Boot Manager. Think of it like a boot manager for your boot manager. Using it you can create a bootable floppy or CDROM that will enable USB booting of your master USB drive. Make a CD and a floppy version and you’ll have everything in your toolkit you need for future computer refurbishing projects. Read up on creating bootable media with PLoP Boot Manager here. Snapping Windows to Preset Coordinates Dear How-To Geek, Once upon a time I had a company laptop that came with a little utility that snapped windows to preset areas of the screen. This was long before the snap-to-side features in Windows 7. You could essentially configure your screen into a grid pattern of your choosing and then windows would neatly snap into those grids. I have no idea what it was called or if was anymore than a gimmick from the computer manufacturer, but I’d really like to have it on my new computer! Bend and Snap in San Francisco, Dear Bend and Snap, If we had to guess, we’d guess your company must have had a set of laptops from Acer as the program you’re describing sounds exactly like Acer GridVista. Fortunately for you the application was extremely popular and Acer released it independently of their hardware. 
    If, by chance, you’ve since upgraded to a multiple monitor setup, the app even supports multiple monitors; many of the configurations are handy for arranging IM windows and other auxiliary communication tools. Check out our guide to installing and configuring Acer GridVista here for more information. Have a question you want to put before the How-To Geek staff? Shoot us an email at [email protected] and then keep an eye out for a solution in the Ask How-To Geek column.

    Read the article

  • Framework 4 Features: Summary of Security enhancements

    - by Anthony Shorten
    In the last log entry I mentioned one of the new security features in Oracle Utilities Application Framework 4.0.1. Security is one of the major "tent poles" (to borrow a phrase from Steve Jobs) in this release of the framework. A number of security related enhancements were requested by customers, and others arose from internal reviews. Here is a summary of some of the security enhancements we have added in this release:

    Security Cache Changes - Security authorization information is automatically cached on the server for performance reasons (security is checked for every single call the product makes, for all modes of access). Prior to this release the cache auto-refreshed every 30 minutes or so. This has been made more nimble by supporting a cache refresh roughly every minute, so authorization changes are reflected sooner than before.

    Business Level Security - Business Services are configurable services that are based upon Application Services. Typically, a business service inherited its security profile from its parent service. While this is sufficient for most needs, it is now possible to further specify security on the Business Service definition itself. This allows granular security: the same application service can be exposed as different Business Services, each with its own security. This is particularly useful when you base a Business Service on a query zone.

    User Propagation - As with other client/server applications, database connections are pooled and shared as needed, which means a common database user is used to access the database from the pool. Unfortunately, this makes traceability at the database level much harder. In Oracle Utilities Application Framework V4 the end userid is now propagated to the database using the CLIENT_IDENTIFIER attribute of the Oracle JDBC connection API. The common database userid is still used, but the end user is identifiable for the duration of the database call. This can be used for monitoring or to hook into Oracle's database security products (a JDBC sketch of this technique appears below). This enhancement is only available to Oracle Database customers.

    Enhanced Security Definitions - Security administrators use the product's browser front end to control the access rights of defined users. While this is sufficient for most sites, a new security portal has been introduced to speed up the maintenance of security information.

    Oracle Identity Manager Integration - With the popularity of Oracle's Identity Management Suite, the Framework now provides an integration adapter and an Identity Manager Generic Transport Connector (GTC) to allow users and group memberships to be provisioned to any Oracle Utilities Application Framework based product from Oracle's Identity Manager. This is also available for Oracle Utilities Application Framework V2.2 customers. Refer to My Oracle Support KB Id 970785.1 - Oracle Identity Manager Integration Overview.

    Audit On Inquiry - Typically the configurable audit facility in the Oracle Utilities Application Framework is used to audit changes to records, although Business Services and Service Scripts could also be configured to audit inquiries. Now it is possible to attach auditing capabilities to zones in the product (including base package ones).

    Time Zone Support - In some of the Oracle Utilities Application Framework based products, the time zone of the end user is a factor in processing. The user object has been extended to allow the recording of time zone information for use in product functionality.

    JAAS Support - Internally the Oracle Utilities Application Framework used a number of techniques to validate and transmit security information across the architecture. These have been reconciled into using the Java Authentication and Authorization Service (JAAS) for standardized security (a generic login sketch appears below). This is strictly an internal change, with no direct effect on how security operates externally.

    JMX Based Cache Management - Earlier I mentioned extra security applied to cache management from the browser. Alternatively, a JMX based interface is now provided to allow IT operations to control the cache without the browser interface. This JMX capability can be initiated from a JSR 160 compliant JMX console or JMX browser (a sketch of such a client appears below). I will write another, more detailed blog entry on the JMX enhancements, as it is quite a change and an exciting direction for the product line.

    Data Patch Permissions - Some operations of the database installer provided with the product required lower levels of security, and some sites wanted the ability for non-DBAs to execute the utilities in a controlled fashion. The framework now allows feature configuration to delegate patch execution.

    User Enable Support - At some sites, the use of temporary staff such as contractors is commonplace, and temporary security setups were required for them. An issue arose when a contractor left the company: the IT group would remove the contractor from the security repository to prevent login with that userid, but the userid could NOT be removed from the authorization model because of audit requirements (if any user in the product updates financials or key data, their userid is recorded for audit purposes). It is now possible to disable the user in the security model, preventing any use of the userid whilst retaining audit information.

    These are a subset of the security changes in Oracle Utilities Application Framework. More details about the security capabilities of the product are contained in My Oracle Support KB Id 773473.1 - Oracle Utilities Application Framework Security Overview.
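    To make the user propagation technique above concrete, here is a minimal JDBC sketch of the same idea. This is not the Framework's internal code: it assumes an Oracle database, and the helper method names are our own. It tags a pooled connection with the real end user by calling the standard DBMS_SESSION package, which populates the same CLIENT_IDENTIFIER attribute.

    ```java
    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.SQLException;

    public class ClientIdentifierSketch {

        // Tag a pooled connection with the end user's id so database-level
        // monitoring and auditing can see who is behind the shared pool account.
        static void setEndUserContext(Connection conn, String endUserId) throws SQLException {
            try (CallableStatement cs = conn.prepareCall(
                    "begin dbms_session.set_identifier(?); end;")) {
                cs.setString(1, endUserId);
                cs.execute();
            }
        }

        // Clear the identifier before returning the connection to the pool,
        // so the next borrower does not inherit the previous user's identity.
        static void clearEndUserContext(Connection conn) throws SQLException {
            try (CallableStatement cs = conn.prepareCall(
                    "begin dbms_session.clear_identifier; end;")) {
                cs.execute();
            }
        }
    }
    ```

    Once set, the value is visible in V$SESSION.CLIENT_IDENTIFIER and flows into the database audit trail, which is what makes activity on a pooled connection traceable to an individual user.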
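    For readers who have not used JAAS directly, the sketch below shows the standard javax.security.auth login flow in isolation. The "AppLogin" configuration name and the demo credentials are hypothetical; this illustrates the API the Framework has standardized on, not how the Framework invokes it internally.

    ```java
    import javax.security.auth.Subject;
    import javax.security.auth.callback.Callback;
    import javax.security.auth.callback.NameCallback;
    import javax.security.auth.callback.PasswordCallback;
    import javax.security.auth.login.LoginContext;
    import javax.security.auth.login.LoginException;

    public class JaasLoginSketch {
        public static void main(String[] args) throws LoginException {
            // "AppLogin" must match an entry in the JAAS login configuration,
            // e.g. supplied with -Djava.security.auth.login.config=login.conf.
            LoginContext lc = new LoginContext("AppLogin", callbacks -> {
                for (Callback cb : callbacks) {
                    if (cb instanceof NameCallback) {
                        ((NameCallback) cb).setName("demoUser");
                    } else if (cb instanceof PasswordCallback) {
                        ((PasswordCallback) cb).setPassword("demoPass".toCharArray());
                    }
                }
            });
            lc.login();                        // drives the configured LoginModules
            Subject subject = lc.getSubject(); // authenticated identity and principals
            System.out.println("Principals: " + subject.getPrincipals());
            lc.logout();
        }
    }
    ```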
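    And to show what JMX based cache management looks like from the operations side, here is a sketch of a standalone client built on the standard javax.management.remote API. The service URL, the MBean object name and the flushCache operation are placeholders invented for this example; consult the product documentation for the real names.

    ```java
    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class CacheFlushSketch {
        public static void main(String[] args) throws Exception {
            // Placeholder host/port for the application server's JMX endpoint.
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://appserver:1100/jmxrmi");
            JMXConnector connector = JMXConnectorFactory.connect(url);
            try {
                MBeanServerConnection mbsc = connector.getMBeanServerConnection();
                // Hypothetical MBean name and operation.
                ObjectName cacheBean = new ObjectName("oracle.ouaf:type=CacheManager");
                mbsc.invoke(cacheBean, "flushCache", new Object[0], new String[0]);
            } finally {
                connector.close();
            }
        }
    }
    ```

    The same operations are reachable from any JSR 160 compliant console (for example, jconsole), which is what makes this approach attractive for scripted IT operations.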

    Read the article

  • Developer Training – A Conclusive Summary – Part 5

    - by pinaldave
    Developer Training – Importance and Significance – Part 1
    Developer Training – Employee Morals and Ethics – Part 2
    Developer Training – Difficult Questions and Alternative Perspective – Part 3
    Developer Training – Various Options for Developer Training – Part 4
    Developer Training – A Conclusive Summary – Part 5

    We have now reached the end of our series about developer training. I hope you have come away thinking that training is the best way to advance in your company and that you are looking for training opportunities right now. If you’re still not convinced, here are a few things to keep in mind: Training benefits the employer and the employee. A well-trained employee is a happy employee, and a happy employee is more efficient and productive. Training an employee might be expensive, but it is less expensive than hiring a new person. Whether you look at it from the employee’s or the company’s point of view, there are always advantages to training.

    A Broader View

    This series was written with developer training in mind, but it is not limited to developers. There are IT Pros, System Admins, DBAs and many other technology professionals; this article series is for all of them. The concepts and takeaways remain common across all platforms, regardless of technology affiliation.

    Pass the Knowledge

    If I had to pick the single most important piece of advice about training, it would be this: pass the knowledge on. Once you have decided in favor of training, there is more to it than simply showing up and staying awake. It is always a good idea to take notes; at the very least they will help you stay awake, and they will often serve as a good way to remember your training when you go back to work. You can also use them to pass your new knowledge on to fellow employees, which can be very fun and rewarding.

    Right Place, Right Time and Right Training

    There are many ways to get developer training. In-person and on-the-job training is easy to come by and is the most common type, but don’t overlook my favorite: on-demand training. Being able to learn at your own pace, in your own place, and on your own time makes training a realistic goal for almost every employee. I can think of nothing more important in life than furthering your education, especially when you work in a field that is constantly changing, like technology. Whether you like it or not, training is incredibly important. That is why I feel it is so important to receive it. And because there are so many different training formats (live, online, through books, through people) I am certain that we all can find a way to be trained that best suits our goals and personalities.

    The Teacher Within

    If you think of anyone who is a master of the technology field or an incredibly successful developer (the obvious examples that spring to mind are Steve Jobs or Bill Gates), you will also find a teacher. Both these individuals spent their lives developing better technology, but also educating other developers and the public about how to use these technologies and how they can change your life for the better. I think that we all should strive to be like these wonderful teachers. We might not be able to change the world, but we can certainly change a few lives around us. Even if we never turn into trainers ourselves, being trained as a student can be a good exercise. We learn a lot and become better employees, and it would not be a stretch to say that this makes us better individuals as well.

    Final Say

    I think learning and growing in your chosen field is not only a good idea career-wise, but can be fun, too! I, for one, never feel more alive than when I am learning about something I am really passionate about. My job title, technology evangelist, explains how enthusiastic I am about this subject. But please don’t think I approach this only as someone who wants to train and educate others (although that is also one of my passions); I am also a passionate student. I enjoy learning new things and am always on the lookout for new ways to learn and new people to learn from.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Developer Training, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

< Previous Page | 110 111 112 113 114 115 116 117 118 119 120 121  | Next Page >