Search Results

Search found 15099 results on 604 pages for 'runtime environment'.

  • Best way to process a queue in C# (PDF treatment)

    - by Bartdude
    First of all, let me explain what I'd like to do: I already have a long-running webapp developed in ASP.NET (C#) 2.0. In this app, users can upload standard PDF files (text + pics). The app runs in production on Windows Server 2003 and has a dedicated database server (SQL Server 2008), also running Windows Server 2003. I'm a fairly experienced web developer myself, but I've never actually programmed anything non-web (or at least nothing serious).

    I plan on adding a feature to the webapp for which I would need a JPG snapshot of each page of the PDF. Creating these "thumbnails" isn't the big deal as such; I already do it inside my webapp using Ghostscript. I've only done it on one-page documents so far, though, and the new feature will need to process bigger documents. To keep the process transparent for the admins as well as the end users, I would like to implement some kind of queue to defer the processing of the thumbnails. There again, no problem creating the queue: it will consist of records in a table, with enough info to find the PDF document again. Then I will need to process this queue, and that's where my questions start. Obviously the best tool to process it isn't an ASP script, so I will have to step outside my familiar environment. No problem, but I have no idea which direction to go. Therefore, a few questions:

    What should I develop? I presumably need something that stands by on the server, runs when needed, then returns to an idle state until further notice. Should I be looking into a Windows service? Is there another, more appropriate type of project?

    Depending on the first answer, what should the approach be? Should I somehow have SQL Server "tell" the program/service to process the queue, or should that program/service periodically check the state of the queue and handle new items? In either case, which feature can I use? We're not talking about hundreds of PDFs a day (50 max, maybe), so I can totally afford to process the queue one item at a time.

    Can you confirm I don't have to look much further into threads and the like? (I found a lot of answers about threads in queue processing, but it looks quite overkill for my needs.) Maybe linked to the previous question: what about concurrent calls to the program, whatever it is? Suppose it is currently running and a new record arrives in the queue; what should the behaviour be?

    I don't need very detailed answers and would already be happy with something like "You can do the processing with a service, and yes it's possible to have SQL Server on machine A trigger a service start on machine B" or "You have to develop xxx and then use the scheduler to run it every xxx minutes". I don't mind reading articles, but I can hardly afford to spend too much time learning stuff only to realize I went the wrong way for this project, so basically I'm trying to narrow down the scope of what I need to investigate. Thanks for reading; I hope I'll find some helping hands on here :-)
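    For whatever it's worth, here is a minimal sketch of the "periodically check the queue" approach, written as a console program that could run as a Windows service or from Task Scheduler. Everything schema-related is an assumption (the PdfQueue table, its Id/FilePath/Processed columns, the connection string, and the GeneratePageThumbnails helper standing in for the existing Ghostscript code):

        using System;
        using System.Data.SqlClient;
        using System.Threading;

        class PdfQueueWorker
        {
            // Hypothetical connection string and schema; replace with the real queue table.
            const string ConnStr = "Server=DBSERVER;Database=WebAppDb;Integrated Security=True";

            static void Main()
            {
                while (true)
                {
                    int id = -1;
                    string path = null;

                    using (SqlConnection conn = new SqlConnection(ConnStr))
                    {
                        conn.Open();
                        // Take the oldest unprocessed item; one at a time is plenty for ~50 PDFs a day.
                        SqlCommand cmd = new SqlCommand(
                            "SELECT TOP 1 Id, FilePath FROM PdfQueue WHERE Processed = 0 ORDER BY Id", conn);
                        using (SqlDataReader reader = cmd.ExecuteReader())
                        {
                            if (reader.Read())
                            {
                                id = reader.GetInt32(0);
                                path = reader.GetString(1);
                            }
                        }
                    }

                    if (id < 0)
                    {
                        Thread.Sleep(TimeSpan.FromMinutes(1));   // queue empty: idle until the next poll
                        continue;
                    }

                    GeneratePageThumbnails(path);                // the existing Ghostscript code goes here

                    using (SqlConnection conn = new SqlConnection(ConnStr))
                    {
                        conn.Open();
                        SqlCommand done = new SqlCommand(
                            "UPDATE PdfQueue SET Processed = 1 WHERE Id = @id", conn);
                        done.Parameters.AddWithValue("@id", id);
                        done.ExecuteNonQuery();
                    }
                }
            }

            static void GeneratePageThumbnails(string pdfPath)
            {
                // Placeholder for the Ghostscript-based thumbnail generation already used in the web app.
            }
        }

    Because the loop handles one row per pass and never overlaps with itself, the concurrency question largely goes away: a row that arrives while an item is being processed simply waits for the next pass.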

  • Android client not working [migrated]

    - by Syeda Zunairah
    i have a java client and c# server the server code is static Socket listeningSocket; static Socket socket; static Thread thrReadRequest; static int iPort = 4444; static int iConnectionQueue = 100; static void Main(string[] args) { Console.WriteLine(IPAddress.Parse(getLocalIPAddress()).ToString()); try { listeningSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp); //listeningSocket.Bind(new IPEndPoint(0, iPort)); listeningSocket.Bind(new IPEndPoint(IPAddress.Parse(getLocalIPAddress()), iPort)); listeningSocket.Listen(iConnectionQueue); thrReadRequest = new Thread(new ThreadStart(getRequest)); thrReadRequest.Start(); } catch (Exception e) { Console.WriteLine("Winsock error: " + e.ToString()); //throw; } } static private void getRequest() { int i = 0; while (true) { i++; Console.WriteLine("Outside Try i = {0}", i.ToString()); try { socket = listeningSocket.Accept(); // Receiving //byte[] rcvLenBytes = new byte[4]; //socket.Receive(rcvLenBytes); //int rcvLen = System.BitConverter.ToInt32(rcvLenBytes, 0); //byte[] rcvBytes = new byte[rcvLen]; //socket.Receive(rcvBytes); //String formattedBuffer = System.Text.Encoding.ASCII.GetString(rcvBytes); byte[] buffer = new byte[socket.SendBufferSize]; int iBufferLength = socket.Receive(buffer, 0, buffer.Length, 0); Console.WriteLine("Received {0}", iBufferLength); Array.Resize(ref buffer, iBufferLength); string formattedBuffer = Encoding.ASCII.GetString(buffer); Console.WriteLine("Data received by Client: {0}", formattedBuffer); if (formattedBuffer == "quit") { socket.Close(); listeningSocket.Close(); Environment.Exit(0); } Console.WriteLine("Inside Try i = {0}", i.ToString()); Thread.Sleep(500); } catch (Exception e) { //socket.Close(); Console.WriteLine("Receiving error: " + e.ToString()); Console.ReadKey(); //throw; } finally { socket.Close(); //listeningsocket.close(); } } } static private string getLocalIPAddress() { IPHostEntry host; string localIP = ""; host = Dns.GetHostEntry(Dns.GetHostName()); foreach (IPAddress ip in host.AddressList) { if (ip.AddressFamily == AddressFamily.InterNetwork) { localIP = ip.ToString(); break; } } return localIP; } } and the jave android code is private TCPClient mTcpClient; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); final EditText editText = (EditText) findViewById(R.id.edit_message); Button send = (Button)findViewById(R.id.sendbutton); // connect to the server new connectTask().execute(""); send.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View view) { String message = editText.getText().toString(); //sends the message to the server if (mTcpClient != null) { mTcpClient.sendMessage(message); } editText.setText(""); } }); } @Override public boolean onCreateOptionsMenu(Menu menu) { getMenuInflater().inflate(R.menu.main, menu); return true; } public class connectTask extends AsyncTask<String,String,TCPClient> { @Override protected TCPClient doInBackground(String... message) { mTcpClient = new TCPClient(new TCPClient.OnMessageReceived() { @Override public void messageReceived(String message) { publishProgress(message); } }); mTcpClient.run(); return null; } @Override protected void onProgressUpdate(String... values) { super.onProgressUpdate(values); } } } when i run the server it gives output of try i=1. can any one tell me what to do next
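    Just as an observation: the posted server only ever receives; it never writes anything back, so the client's messageReceived() callback has nothing to fire on. A minimal sketch of replying from inside the receive loop (assuming the unshown TCPClient class reads plain text) might be:

        // right after formattedBuffer has been printed, still inside the try block:
        byte[] reply = Encoding.ASCII.GetBytes("echo: " + formattedBuffer + "\n");
        socket.Send(reply, 0, reply.Length, SocketFlags.None);

    Also, "Outside Try i = 1" followed by silence usually just means Accept() is blocking while it waits for a client to connect, which is expected; and note that the finally block closes the accepted socket on every pass through the loop, so a client that stays connected will be cut off after a single message.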

  • Powershell script to change screen Orientation

    - by user161964
    I wrote a script to change Primary screen orientation to portrait. my screen is 1920X1200 It runs and no error reported. But the screen does not rotated as i expected. The code was modified from Set-ScreenResolution (Andy Schneider) Does anybody can help me take a look? some reference site: 1.set-screenresolution http://gallery.technet.microsoft.com/ScriptCenter/2a631d72-206d-4036-a3f2-2e150f297515/ 2.C code for change oridentation (MSDN) Changing Screen Orientation Programmatically http://msdn.microsoft.com/en-us/library/ms812499.aspx my code as below: Function Set-ScreenOrientation { $pinvokeCode = @" using System; using System.Runtime.InteropServices; namespace Resolution { [StructLayout(LayoutKind.Sequential)] public struct DEVMODE1 { [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 32)] public string dmDeviceName; public short dmSpecVersion; public short dmDriverVersion; public short dmSize; public short dmDriverExtra; public int dmFields; public short dmOrientation; public short dmPaperSize; public short dmPaperLength; public short dmPaperWidth; public short dmScale; public short dmCopies; public short dmDefaultSource; public short dmPrintQuality; public short dmColor; public short dmDuplex; public short dmYResolution; public short dmTTOption; public short dmCollate; [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 32)] public string dmFormName; [MarshalAs(UnmanagedType.U4)] public short dmDisplayOrientation public short dmLogPixels; public short dmBitsPerPel; public int dmPelsWidth; public int dmPelsHeight; public int dmDisplayFlags; public int dmDisplayFrequency; public int dmICMMethod; public int dmICMIntent; public int dmMediaType; public int dmDitherType; public int dmReserved1; public int dmReserved2; public int dmPanningWidth; public int dmPanningHeight; }; class User_32 { [DllImport("user32.dll")] public static extern int EnumDisplaySettings(string deviceName, int modeNum, ref DEVMODE1 devMode); [DllImport("user32.dll")] public static extern int ChangeDisplaySettings(ref DEVMODE1 devMode, int flags); public const int ENUM_CURRENT_SETTINGS = -1; public const int CDS_UPDATEREGISTRY = 0x01; public const int CDS_TEST = 0x02; public const int DISP_CHANGE_SUCCESSFUL = 0; public const int DISP_CHANGE_RESTART = 1; public const int DISP_CHANGE_FAILED = -1; } public class PrmaryScreenOrientation { static public string ChangeOrientation() { DEVMODE1 dm = GetDevMode1(); if (0 != User_32.EnumDisplaySettings(null, User_32.ENUM_CURRENT_SETTINGS, ref dm)) { dm.dmDisplayOrientation = DMDO_90 dm.dmPelsWidth = 1200; dm.dmPelsHeight = 1920; int iRet = User_32.ChangeDisplaySettings(ref dm, User_32.CDS_TEST); if (iRet == User_32.DISP_CHANGE_FAILED) { return "Unable To Process Your Request. Sorry For This Inconvenience."; } else { iRet = User_32.ChangeDisplaySettings(ref dm, User_32.CDS_UPDATEREGISTRY); switch (iRet) { case User_32.DISP_CHANGE_SUCCESSFUL: { return "Success"; } case User_32.DISP_CHANGE_RESTART: { return "You Need To Reboot For The Change To Happen.\n If You Feel Any Problem After Rebooting Your Machine\nThen Try To Change Resolution In Safe Mode."; } default: { return "Failed"; } } } } else { return "Failed To Change."; } } private static DEVMODE1 GetDevMode1() { DEVMODE1 dm = new DEVMODE1(); dm.dmDeviceName = new String(new char[32]); dm.dmFormName = new String(new char[32]); dm.dmSize = (short)Marshal.SizeOf(dm); return dm; } } } "@ Add-Type $pinvokeCode -ErrorAction SilentlyContinue [Resolution.PrmaryScreenOrientation]::ChangeOrientation() }
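    At a glance, the embedded C# won't compile as posted, and with -ErrorAction SilentlyContinue on Add-Type the compile errors are swallowed, so the script carries on without a freshly compiled type. A sketch of the suspect lines, with the caveat that the DEVMODE field types and offsets still need checking against the native structure:

        // In the DEVMODE1 struct: the semicolon after dmDisplayOrientation is missing,
        // and [MarshalAs(UnmanagedType.U4)] does not match a 'short' field.
        public short dmDisplayOrientation;
        public short dmLogPixels;

        // DMDO_90 is used but never defined; the wingdi.h values are:
        public const int DMDO_DEFAULT = 0;
        public const int DMDO_90 = 1;
        public const int DMDO_180 = 2;
        public const int DMDO_270 = 3;

        // And in ChangeOrientation(), the assignment also needs its semicolon:
        dm.dmDisplayOrientation = DMDO_90;
        dm.dmPelsWidth = 1200;
        dm.dmPelsHeight = 1920;

    Dropping -ErrorAction SilentlyContinue from the Add-Type call is probably the quickest way to see the real compile errors.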

  • Why does Cacti keep waiting for dead poller processes?

    - by Oliver Salzburg
    sorry for the length I am currently setting up a new Debian (6.0.5) server. I put Cacti (0.8.7g) on it yesterday and have been battling with it ever since. Initial issue The initial issue I was observing, was that my graphs weren't updating. So I checked my cacti.log and found this concerning message: POLLER: Poller[0] Maximum runtime of 298 seconds exceeded. Exiting. That can't be good, right? So I went checking and started poller.php myself (via sudo -u www-data php poller.php --force). It will pump out a lot of message (which all look like what I would expect) and then hang for a minute. After that 1 minute, it will loop the following message: Waiting on 1 of 1 pollers. This goes on for 4 more minutes until the process is forcefully ended for running longer than 300s. So far so good I went on for a good hour trying to determine what poller might still be running, until I got to the conclusion that there simply is no running poller. Debugging I checked poller.php to see how that warning is issued and why. On line 368, Cacti will retrieve the number of finished processes from the database and use that value to calculate how many processes are still running. So, let's see that value! I added the following debug code into poller.php: print "Finished: " . $finished_processes . " - Started: " . $started_processes . "\n"; Result This will print the following within seconds of starting poller.php: Finished: 0 - Started: 1 Waiting on 1 of 1 pollers. Finished: 1 - Started: 1 So the values are being read and are valid. Until we get to the part where it keeps looping: Finished: - Started: 1 Waiting on 1 of 1 pollers. Suddenly, the value is gone. Why? Putting var_dump() in there confirms the issue: NULL Finished: - Started: 1 Waiting on 1 of 1 pollers. The return value is NULL. How can that be when querying SELECT COUNT()...? (SELECT COUNT() should always return one result row, shouldn't it?) More debugging So I went into lib\database.php and had a look at that db_fetch_cell(). A bit of testing confirmed, that the result set is actually empty. So I added my own database query code in there to see what that would do: $finished_processes = db_fetch_cell("SELECT count(*) FROM poller_time WHERE poller_id=0 AND end_time>'0000-00-00 00:00:00'"); print "Finished: " . $finished_processes . " - Started: " . $started_processes . "\n"; $mysqli = new mysqli("localhost","cacti","cacti","cacti"); $result = $mysqli->query("SELECT COUNT(*) FROM poller_time WHERE poller_id=0 AND end_time>'0000-00-00 00:00:00';"); $row = $result->fetch_assoc(); var_dump( $row ); This will output Finished: - Started: 1 array(1) { ["COUNT(*)"]=> string(1) "2" } Waiting on 1 of 1 pollers. So, the data is there and can be accessed without any problems, just not with the method Cacti is using? Double-check that! I enabled MySQL logging to make sure I'm not imagining things. Sure enough, when the error message is looped, the cacti.log reads as if it was querying like mad: 06/29/2012 08:44:00 PM - CMDPHP: Poller[0] DEVEL: SQL Cell: "SELECT count(*) FROM cacti.poller_time WHERE poller_id=0 AND end_time>'0000-00-00 00:00:00'" 06/29/2012 08:44:01 PM - CMDPHP: Poller[0] DEVEL: SQL Cell: "SELECT count(*) FROM cacti.poller_time WHERE poller_id=0 AND end_time>'0000-00-00 00:00:00'" 06/29/2012 08:44:02 PM - CMDPHP: Poller[0] DEVEL: SQL Cell: "SELECT count(*) FROM cacti.poller_time WHERE poller_id=0 AND end_time>'0000-00-00 00:00:00'" But none of these queries are logged my MySQL. 
Yet, when I add my own database query code, it shows up just fine. What the heck is going on here?

  • Macports: port install jpeg fails

    - by Philipp Keller
    History: installed MacPorts on Leopard upgraded to Snow Leopard uninstall all ports reinstalled XCode sudo port uninstall jpeg sudo port install jpeg DEBUG: Found port in file:///opt/local/var/macports/sources/rsync.macports.org/release/ports/graphics/jpeg DEBUG: Changing to port directory: /opt/local/var/macports/sources/rsync.macports.org/release/ports/graphics/jpeg DEBUG: OS Platform: darwin DEBUG: OS Version: 10.3.0 DEBUG: Mac OS X Version: 10.6 DEBUG: System Arch: i386 DEBUG: setting option os.universal_supported to yes DEBUG: org.macports.load registered provides 'load', a pre-existing procedure. Target override will not be provided DEBUG: org.macports.unload registered provides 'unload', a pre-existing procedure. Target override will not be provided DEBUG: org.macports.distfiles registered provides 'distfiles', a pre-existing procedure. Target override will not be provided DEBUG: adding the default universal variant DEBUG: Reading variant descriptions from /opt/local/var/macports/sources/rsync.macports.org/release/ports/_resources/port1.0/variant_descriptions.conf DEBUG: Requested variant darwin is not provided by port jpeg. DEBUG: Requested variant i386 is not provided by port jpeg. DEBUG: Requested variant macosx is not provided by port jpeg. --- Computing dependencies for jpeg DEBUG: Executing org.macports.main (jpeg) DEBUG: Skipping completed org.macports.fetch (jpeg) DEBUG: Skipping completed org.macports.checksum (jpeg) DEBUG: Skipping completed org.macports.extract (jpeg) DEBUG: Skipping completed org.macports.patch (jpeg) --- Configuring jpeg DEBUG: Using compiler 'Mac OS X gcc 4.2' DEBUG: Executing org.macports.configure (jpeg) DEBUG: Environment: CFLAGS='-O2 -arch x86_64' CXXFLAGS='-O2 -arch x86_64' MACOSX_DEPLOYMENT_TARGET='10.6' CXX='/usr/bin/g++-4.2' F90FLAGS='-O2 -m64' LDFLAGS='-arch x86_64' OBJC='/usr/bin/gcc-4.2' FCFLAGS='-O2 -m64' INSTALL='/usr/bin/install -c' OBJCFLAGS='-O2 -arch x86_64' FFLAGS='-O2 -m64' CC='/usr/bin/gcc-4.2' DEBUG: Assembled command: 'cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_graphics_jpeg/work/jpeg-8a" && ./configure --prefix=/opt/local' sh: line 0: cd: /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_graphics_jpeg/work/jpeg-8a: No such file or directory Error: Target org.macports.configure returned: configure failure: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_graphics_jpeg/work/jpeg-8a" && ./configure --prefix=/opt/local " returned error 1 DEBUG: Backtrace: configure failure: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_graphics_jpeg/work/jpeg-8a" && ./configure --prefix=/opt/local " returned error 1 while executing "$procedure $targetname" Warning: the following items did not execute (for jpeg): org.macports.activate org.macports.configure org.macports.build org.macports.destroot org.macports.install Error: Status 1 encountered during processing. To report a bug, see http://guide.macports.org/#project.tickets
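    One thing that stands out in the log: fetch/extract/patch are all reported as already completed, yet the work directory (.../work/jpeg-8a) is gone, so configure has nothing to cd into. That looks like stale build state left over from the Leopard-era install; clearing it before reinstalling is a reasonable first try (a guess, not a guaranteed fix):

        sudo port clean --all jpeg
        sudo port install jpeg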

  • How can I run supervisord without using root?

    - by Jason Baker
    I seem to be having trouble figuring out why supervisord won't run as a non-root user. If I start it with the user set to jason (pid 1000), I get the following in the log file: 2010-05-24 08:53:32,143 CRIT Set uid to user 1000 2010-05-24 08:53:32,143 WARN Included extra file "/home/jason/src/tsched/celeryd.conf" during parsing 2010-05-24 08:53:32,189 INFO RPC interface 'supervisor' initialized 2010-05-24 08:53:32,189 WARN cElementTree not installed, using slower XML parser for XML-RPC 2010-05-24 08:53:32,189 CRIT Server 'unix_http_server' running without any HTTP authentication checking 2010-05-24 08:53:32,190 INFO daemonizing the supervisord process 2010-05-24 08:53:32,191 INFO supervisord started with pid 3444 ...then the process dies for some unknown reason. If I start it without sudo (under the user jason), I get similar output: 2010-05-24 08:51:32,859 INFO supervisord started with pid 3306 2010-05-24 08:52:15,761 CRIT Can't drop privilege as nonroot user 2010-05-24 08:52:15,761 WARN Included extra file "/home/jason/src/tsched/celeryd.conf" during parsing 2010-05-24 08:52:15,807 INFO RPC interface 'supervisor' initialized 2010-05-24 08:52:15,807 WARN cElementTree not installed, using slower XML parser for XML-RPC 2010-05-24 08:52:15,807 CRIT Server 'unix_http_server' running without any HTTP authentication checking 2010-05-24 08:52:15,808 INFO daemonizing the supervisord process 2010-05-24 08:52:15,809 INFO supervisord started with pid 3397 ...and it still doesn't run. If it's any help, here's the supervisord.conf file I'm using: [unix_http_server] file=/tmp/supervisor.sock ; path to your socket file [supervisord] logfile=./supervisord.log ; supervisord log file logfile_maxbytes=50MB ; maximum size of logfile before rotation logfile_backups=10 ; number of backed up logfiles loglevel=debug ; info, debug, warn, trace pidfile=./supervisord.pid ; pidfile location nodaemon=false ; run supervisord as a daemon minfds=1024 ; number of startup file descriptors minprocs=200 ; number of process descriptors user=jason ; default user childlogdir=./supervisord/ ; where child log files will live [rpcinterface:supervisor] supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface [supervisorctl] serverurl=unix:///tmp/supervisor.sock ; use unix:// schem for a unix sockets. [include] # Uncomment this line for celeryd for Python files=celeryd.conf # Uncomment this line for celeryd for Django. ;files=django/celeryd.conf ...and here's celeryd.conf: [program:celery] command=bin/celeryd --loglevel=INFO --logfile=./celeryd.log environment=PYTHONPATH='./tsched_worker', JIVA_DB_PLATFORM='oracle', ORACLE_HOME='/usr/lib/oracle/xe/app/oracle/product/10.2.0/server', LD_LIBRARY_PATH='/usr/lib/oracle/xe/app/oracle/product/10.2.0/server/lib', TNS_ADMIN='/home/jason', CELERY_CONFIG_MODULE='tsched_worker.celeryconfig' directory=. user=jason numprocs=1 stdout_logfile=/var/log/celeryd.log stderr_logfile=/var/log/celeryd.log autostart=true autorestart=true startsecs=10 ; Need to wait for currently executing tasks to finish at shutdown. ; Increase this if you have very long running tasks. stopwaitsecs = 600 ; if rabbitmq is supervised, set its priority higher ; so it starts first priority=998 Can anyone help me figure out what's going on?
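    Two things that might be worth ruling out: the relative logfile/pidfile/childlogdir paths (they resolve against whatever the current directory happens to be), and the user=jason line, which is what produces "Can't drop privilege as nonroot user" when supervisord is already started as jason. A debugging sketch of the [supervisord] section, with placeholder paths, that keeps the daemon in the foreground so the exit reason lands on the terminal:

        ; run in the foreground while debugging; absolute paths; no privilege drop needed
        [supervisord]
        logfile=/home/jason/src/tsched/supervisord.log
        pidfile=/home/jason/src/tsched/supervisord.pid
        childlogdir=/home/jason/src/tsched/supervisord
        loglevel=debug
        nodaemon=true
        ;user=jason

    The celeryd stdout_logfile/stderr_logfile pointing at /var/log/celeryd.log is another likely casualty of running as a non-root user, since jason probably can't write there.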

  • XRDP: window manager not starting

    - by niboshi
    I have setup my Ubuntu server so that I can connect and login to XRDP from Windows remote desktop. My problem is that after logging in, no window-manager is started. It only displays a single gnome-terminal with no border and gray meshed background. It seems that /usr/sbin/xrdp-sesman itself is running (from observation of ps and /var/run/xrdp/xrdp-sesman.pid). I put debugging line like touch /home/myname/aaaaa into ~/startwm.sh or /etc/xrdp/startwm.sh, but the file aaaaa did not generated after logging in, so these scripts have not been executed. (Both of them have chmod +x permission.) Am I missing some configuration file, or is there any way of further inspection? Any help is appreciated. Thanks. Contents of /etc/xrdp/sesman.ini [Globals] ListenAddress=127.0.0.1 ListenPort=3350 EnableUserWindowManager=0 # or 1 UserWindowManager=startwm.sh DefaultWindowManager=startwm.sh # or commented-out [Security] AllowRootLogin=1 MaxLoginRetry=4 TerminalServerUsers=tsusers TerminalServerAdmins=tsadmins [Sessions] MaxSessions=10 KillDisconnected=0 IdleTimeLimit=0 DisconnectedTimeLimit=0 [Logging] LogFile=/var/log/xrdp-sesman.log LogLevel=DEBUG EnableSyslog=0 SyslogLevel=DEBUG [X11rdp] param1=-bs param2=-ac param3=-nolisten param4=tcp [Xvnc] param1=-bs param2=-ac param3=-nolisten param4=tcp Contents of /var/log/xrdp-sesman.log after logging in: [20120402-21:29:34] [CORE ] starting sesman with pid 11064 [20120402-21:29:34] [INFO ] listening... [20120402-21:29:39] [INFO ] scp thread on sck 7 started successfully [20120402-21:29:39] [INFO ] granted TS access to user myname [20120402-21:29:39] [INFO ] starting Xvnc session... [20120402-21:29:40] [INFO ] starting xrdp-sessvc - xpid=11074 - wmpid=11073 [20120402-21:29:49] [INFO ] session 11072 - user myname- terminated Process tree Below is a part of ps aufx output during xrdp session: xrdp 12344 0.0 0.4 22856 8732 ? Sl Apr02 0:01 /usr/sbin/xrdp root 12346 0.0 0.0 15672 2000 ? S Apr02 0:00 /usr/sbin/xrdp-sesman root 24346 0.0 0.0 3780 872 ? S 00:00 0:00 \_ /usr/sbin/xrdp-sessvc 24348 24347 myname 24347 0.4 0.6 76468 13700 ? Sl 00:00 0:14 \_ gnome-terminal myname 24362 0.0 0.0 2220 716 ? S 00:00 0:00 | \_ gnome-pty-helper myname 24363 0.0 0.2 6912 5268 pts/13 Ss 00:00 0:00 | \_ bash myname 27902 0.0 0.0 2824 1096 pts/13 R+ 00:53 0:00 | \_ ps aufx myname 24348 0.0 0.9 24984 19216 ? S 00:00 0:01 \_ Xvnc :18 -geometry 1920x1080 -depth 24 -rfbauth /home/myname/.vnc/sesman_myname_passwd -bs -ac -nolisten tcp root 24349 0.0 0.0 16596 1304 ? Sl 00:00 0:00 \_ xrdp-chansrv Environment Ubuntu 11.10 Oneiric xrdp version: 0.5.0~20100303cvs-6ubuntu2
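    A gray stippled background with one bare terminal is what an Xvnc session looks like when no window manager or session gets started at all, which matches the observation that neither startwm.sh is ever executed (and with EnableUserWindowManager=0, the copy in the home directory would be ignored anyway). One thing worth trying, as an assumption rather than a verified fix, is giving sesman an absolute path and making sure the script hands off to a real session:

        # --- /etc/xrdp/sesman.ini (relevant line) ---
        DefaultWindowManager=/etc/xrdp/startwm.sh

        # --- /etc/xrdp/startwm.sh ---
        #!/bin/sh
        touch /home/myname/aaaaa     # same debug marker as in the question
        . /etc/X11/Xsession

    After editing, restarting xrdp/xrdp-sesman and watching /var/log/xrdp-sesman.log again should show whether the script is now being run.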

  • Setting up Edimax EW-7206APg as Universal Repeater

    - by Ondra Žižka
    Hi, I'm having trouble setting up an Edimax EW-7206APg as a Universal Repeater. I've read a few manuals, but they are unclear on certain points. I've managed to get the repeater into a "connected" state. I set the same WPA passphrase as the router has, because I haven't seen anywhere else to set it. These are my settings:

    System Uptime 0day:1h:33m:11s, Hardware Version Rev. A, Runtime Code Version 1.32
    Wireless Configuration: Mode Universal Repeater, ESSID edimax, Channel Number 6, Security WPA-shared key, BSSID 00:c0:9f:40:bd:38, Associated Clients 0
    Wireless Repeater Interface Configuration: ESSID Dusan, Security WPA, BSSID 00:4f:62:23:8f:7e, State Connected
    LAN Configuration: IP Address 192.168.0.10, Subnet Mask 255.255.255.0, Default Gateway 192.168.0.1, MAC Address 00:c0:9f:40:bd:37

    This is ipconfig /all (labels translated from Czech):

    Connection-specific DNS Suffix: riomail.cz
    Description: Intel(R) PRO/Wireless 2200BG Network Connection
    Physical Address: 00-0E-35-3D-77-68
    DHCP Enabled: Yes
    Autoconfiguration Enabled: Yes
    IP Address: 192.168.0.5
    Subnet Mask: 255.255.255.0
    Default Gateway: 192.168.0.1
    DHCP Server: 192.168.0.1
    DNS Servers: 94.74.192.252, 94.74.192.244

    I can ping the repeater and I can ping the root AP, but not a DNS server or any other IP beyond the root AP. Does anyone have an idea what's wrong? Thanks, Ondra

  • Oracle 9i Session Disconnections

    - by mlaverd
    I am in a development environment, and our test Oracle 9i server has been misbehaving for a few days now. What happens is that we have our JDBC connections disconnecting after a few successful connections. We got this box set up by our IT department and handed over to. It is 'our problem', so options like 'ask you DBA' isn't going to help me. :( Our server is set up with 3 plain databases (one is the main dev db, the other is the 'experimental' dev db). We use the Oracle 10 ojdbc14.jar thin JDBC driver (because of some bug in the version 9 of the driver). We're using Hibernate to talk to the DB. The only thing that I can see that changed is that we now have more users connecting to the server. Instead of one developer, we now have 3. With the Hibernate connection pools, I'm thinking that maybe we're hitting some limit? Anyone has any idea what's going on? Here's the stack trace on the client: Caused by: org.hibernate.exception.GenericJDBCException: could not execute query at org.hibernate.exception.SQLStateConverter.handledNonSpecificException(SQLStateConverter.java:126) [hibernate3.jar:na] at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:114) [hibernate3.jar:na] at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:66) [hibernate3.jar:na] at org.hibernate.loader.Loader.doList(Loader.java:2235) [hibernate3.jar:na] at org.hibernate.loader.Loader.listIgnoreQueryCache(Loader.java:2129) [hibernate3.jar:na] at org.hibernate.loader.Loader.list(Loader.java:2124) [hibernate3.jar:na] at org.hibernate.loader.hql.QueryLoader.list(QueryLoader.java:401) [hibernate3.jar:na] at org.hibernate.hql.ast.QueryTranslatorImpl.list(QueryTranslatorImpl.java:363) [hibernate3.jar:na] at org.hibernate.engine.query.HQLQueryPlan.performList(HQLQueryPlan.java:196) [hibernate3.jar:na] at org.hibernate.impl.SessionImpl.list(SessionImpl.java:1149) [hibernate3.jar:na] at org.hibernate.impl.QueryImpl.list(QueryImpl.java:102) [hibernate3.jar:na] ... 
Caused by: java.sql.SQLException: Io exception: Connection reset at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:112) [ojdbc14.jar:Oracle JDBC Driver version - "10.2.0.4.0"] at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:146) [ojdbc14.jar:Oracle JDBC Driver version - "10.2.0.4.0"] at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:255) [ojdbc14.jar:Oracle JDBC Driver version - "10.2.0.4.0"] at oracle.jdbc.driver.T4CPreparedStatement.executeForDescribe(T4CPreparedStatement.java:829) [ojdbc14.jar:Oracle JDBC Driver version - "10.2.0.4.0"] at oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:1049) [ojdbc14.jar:Oracle JDBC Driver version - "10.2.0.4.0"] at oracle.jdbc.driver.T4CPreparedStatement.executeMaybeDescribe(T4CPreparedStatement.java:854) [ojdbc14.jar:Oracle JDBC Driver version - "10.2.0.4.0"] at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1154) [ojdbc14.jar:Oracle JDBC Driver version - "10.2.0.4.0"] at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3370) [ojdbc14.jar:Oracle JDBC Driver version - "10.2.0.4.0"] at oracle.jdbc.driver.OraclePreparedStatement.executeQuery(OraclePreparedStatement.java:3415) [ojdbc14.jar:Oracle JDBC Driver version - "10.2.0.4.0"] at org.hibernate.jdbc.AbstractBatcher.getResultSet(AbstractBatcher.java:208) [hibernate3.jar:na] at org.hibernate.loader.Loader.getResultSet(Loader.java:1812) [hibernate3.jar:na] at org.hibernate.loader.Loader.doQuery(Loader.java:697) [hibernate3.jar:na] at org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:259) [hibernate3.jar:na] at org.hibernate.loader.Loader.doList(Loader.java:2232) [hibernate3.jar:na]
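    "Io exception: Connection reset" on a connection that has been sitting idle usually points at something between the app and the database (a firewall idle timeout, SQLNET.EXPIRE_TIME, an Oracle profile IDLE_TIME) silently dropping the socket, with Hibernate's built-in pool then handing the dead connection back out. One hedged thing to try is switching to a pool that tests idle connections, e.g. c3p0 via Hibernate properties (the values below are arbitrary, and c3p0 has to be on the classpath):

        # property names as documented for Hibernate 3's c3p0 integration
        hibernate.c3p0.min_size=5
        hibernate.c3p0.max_size=20
        hibernate.c3p0.timeout=300
        hibernate.c3p0.idle_test_period=60
        hibernate.c3p0.max_statements=0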

  • How to know the source of certain TCP traffic on AIX

    - by A.Rashad
    We have two AIX boxes, one for the production system and another for testing. Both systems run ATM machine switches, where the ATM device is connected via a TCP socket. We had an issue on the production system where the machine would power off or get disconnected, but netstat -na | grep <IP of machine> would still report the socket as up. When we simulated that case in the UAT environment, the problem did not happen: the socket would terminate in 3 to 5 minutes. When we sniffed the traffic between the machine and the ATM, we found that no traffic takes place on production, while there is some sort of heartbeat on UAT, but it is not initiated by the application.

    $>tcpdump | grep -v "10.2.2.71" | grep -v "HSRP" | grep "10.3.1.30"
    tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on en6, link-type 1, capture size 96 bytes
    09:08:13.323421 IP server073.afs3-callback > 10.3.1.30.impera: . 278204201:278204202(1) ack 3307884029 win 164
    09:08:13.335334 IP 10.3.1.30.impera > server073.afs3-callback: . ack 1 win 64180
    09:08:23.425771 IP 10.3.1.30.impera > server073.afs3-callback: . 1:2(1) ack 1 win 64180
    09:08:23.425789 IP server073.afs3-callback > 10.3.1.30.impera: . ack 2 win 65535
    09:09:13.628985 IP server073.afs3-callback > 10.3.1.30.impera: . 0:1(1) ack 1 win 164
    09:09:13.633900 IP 10.3.1.30.impera > server073.afs3-callback: . ack 1 win 64180
    09:09:23.373634 IP 10.3.1.30.impera > server073.afs3-callback: . 1:2(1) ack 1 win 64180
    09:09:23.373647 IP server073.afs3-callback > 10.3.1.30.impera: . ack 2 win 65535

    On production, that traffic is not there. We want to know where this traffic is initiated from, so we can implement it on production to sense disconnection. Our comms parameters are: tcp_keepcnt = 2, tcp_keepidle = 100, tcp_keepinit = 150, tcp_keepintvl = 150, tcp_finwait2 = 1200. Can anyone help?

    Edit: One point I missed because I was rushing to a meeting. The difference in setup between production and UAT is that in production we have an F5 working as a load balancer between the ATMs and the AIX box, while it is a direct connection through MPLS in the case of UAT. Note: we had one MPLS-connected and one GPRS-connected ATM on UAT, and both connections terminated in about 4 minutes when unplugged.

    Edit 2: The no -o tcp_timewait command returns 1 on both production and UAT.
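    For what it's worth, those 1-byte probes spaced roughly 50-60 seconds apart look like ordinary TCP keepalives: AIX's tcp_keepidle and tcp_keepintvl are expressed in half-seconds, so the values above translate to 50s idle and 75s between probes, which fits the capture. Keepalive only applies to sockets where the application (or something in the path) has set SO_KEEPALIVE, and on production the F5 terminates the TCP session, so any probes would be answered by the F5 rather than by the ATM; that is a guess, but it would explain both observations. The tunables themselves can be checked and set with no:

        # values are in half-seconds on AIX
        no -o tcp_keepidle             # show the current value
        no -o tcp_keepidle=100         # set it for the running system
        no -p -o tcp_keepidle=100      # AIX 5.3+ : also persist it across reboots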

  • Firefox 3.5.6 causes entire computer to freeze

    - by Anthony Aziz
    Here's the situation: Environment: Just installed a fresh copy of Win7 Pro 32-bit to NTFS partition on 750GB SATA drive Hardware: E8400 3GHz ASUS P5QL Pro 4GB DDR2 1066 RAM EVGA 9800 GTX+ Plenty of cooling, no problems with hardware before Data is stored on a separate partition, including My Documents No security software is yet installed No extensions installed yet Problem: While using Firefox, sometimes the entire computer will freeze/hang. I get no mouse or keyboard input, can't CTRL+ALT+DEL, no "not responding" indication, just a static image on my display. My drivers are all up to date as far as I'm aware (I just installed this copy of Windows last week). I first noticed this when trying to install Xmarks. I went to the Xmarks site and tried to install and it would freeze. I managed to get it installed (Safe mode and the Mozilla addon site worked), but when I go to configure it (log in, etc), the computer freezes. I don't think it's a matter of usage time or memory issues, because while testing, I browsed wallpaper galleries for about 30 minutes, sometimes as many as 12-15 tabs open at a time, without issue. Sometimes I won't even try to install Xmarks at it will hang. I can install (some) other extensions, the only one I've tried is download status bar (which works). What I've done to try to fix: Restarted (duh) Windows safe mode Completely remove Firefox and install it to a new directory, according to Mozilla's KB (I haven't tried the profile manager, though I assume this does the same thing, except perhaps more thoroughly) Some BIOS changes, including Power options, disabling oveclocking (it was a modest overclock on the CPU, which has run Win7 beta and RC for almost a year now) Memtest Used another Windows user profile, same tragic results I'm STUCK now, with no idea what to do. I'm using Chrome as my main browser at the moment, but that's not something I want to be stuck with. I like Firefox and want to use it. I'm going to try creating a new profile first. One thing I did notice: I started leaving task manager and performance monitor open when anticipating (but dreading) a freeze. firefox.exe had low CPU and low memory, but it looked like overall disk usage was seeing some spikes on the small graph Performance Monitor gives you. I saw on one blog post a fellow using XP moved his Local Settings directory from a separate drive to his main drive, and that solved it, but I don't think my AppData directory is on my D: drive, and that's on the same physical device anyways. Still, something that might be worth trying. I'd extremely appreciate any help. Thanks very much. I really don't want to reinstall Windows from scratch again :( Anthony Aziz

  • csync2 ERROR: Connection to remote host failed

    - by Emil Salama
    I was unable to find any articles to answer this question, so my best bet was to post this here: Scenario We have 2x application servers in production hosting a PHP website and I would like some folders to be syncronized between the 2, the same was setup for the development environment with no issues, I've followed all instructions from the URL "http://www.cloudedify.com/synchronising-files-in-cloud-with-csync2/", I still seem to have the same result, firewall has been disabled on both boxes for troubeshooting purposes: Config Files: cysnc2.cfg nossl * *; group production { host server1; host server2; key /etc/csync-production-group.key; include /etc/httpd/sites-available; include /xxxxxx/public_html/files include /xxxxxxx/magento/media/catalog/product include /xxxxxxx/magento/media/brands exclude *.log; exclude /xxxx/public_html/file/cache; exclude /xxxxx/public_html/magento/var/cache; exclude /xxxx/public_html/logs; exclude /xxxxx/public_html/magento/var/log; backup-directory /data/sync-conflicts/; backup-generations 2; auto younger; } /etc/xinetd.d/csync2 csync2.cfg service csync2 { disable = no flags = REUSE socket_type = stream wait = no user = root group = root server = /usr/sbin/csync2 server_args = -i -D /data/sync-db/ port = 30865 type = UNLISTED log_type = FILE /data/logs/csync2/csync2-xinetd.log log_on_failure += USERID } I've made sure that the daemon is listening on both server on port 30865 and the keys matched on both servers I've run a tcpdump on each server, output as follows: 12:20:31.366771 IP server1.49919 server2.csync2: Flags [S], seq 445156159, win 14600, options [mss 1460,sackOK,TS val 794864936 ecr 0,nop,wscale 7], length 0 12:20:31.366810 IP server2.csync2 server1.49919: Flags [S.], seq 450593575, ack 445156160, win 14480, options [mss 1460,sackOK,TS val 794798911 ecr 794864936,nop,wscale 7], length 0 12:20:31.367101 IP server1.49919 server2.csync2: Flags [.], ack 1, win 115, options [nop,nop,TS val 794864937 ecr 794798911], length 0 12:20:31.367138 IP server1.49919 server2.csync2: Flags [P.], seq 1:9, ack 1, win 115, options [nop,nop,TS val 794864937 ecr 794798911], length 8 12:20:31.367147 IP server2.csync2 server1.49919: Flags [.], ack 9, win 114, options [nop,nop,TS val 794798912 ecr 794864937], length 0 12:20:31.368625 IP server2.csync2 server1.49919: Flags [R.], seq 1, ack 9, win 114, options [nop,nop,TS val 794798913 ecr 794864937], length 0 Is there anything else i'm missing or should be doing?

  • Reality behind wireless security - the weakness of encrypting

    - by Cawas
    I welcome better key-wording here, both on tags and title, and I'll add more links as soon as possible. For some years I'm trying to conceive a wireless environment that I'd setup anywhere and advise for everyone, including from big enterprises to small home networks of 1 machine. I've always had the feeling using any kind of the so called "wireless security" methods is actually a bad design. I'm talking mostly about encrypting and pass-phrasing (which are actually two different concepts), since I won't even considering hiding SSID and mac filtering. I understand it's a natural way of thinking. With cable networking nobody can access the network unless they have access to the physical cable, so you're "secure" in the physical way. In a way, encrypting is for wireless what walling (building walls) is for the cables. And giving pass-phrases is adding a door with a key. But the cabling without encryption is also insecure. Someone just need to plugin and get your data! And while I can see the use for encrypting data, I don't think it's a security measure in wireless networks. As I said elsewhere, I believe we should encrypt only sensitive data regardless of wires. And passwords should be added to the users, always, not to wifi. For securing files, truly, best solution is backup. Sure all that doesn't happen that often, but I won't consider the most situations where people just don't care. I think there are enough situations where people actually care on using passwords on their OS users, so let's go with that in mind. For being able to break the walls or the door someone will need proper equipment such as a hammer or a master key of some kind. Same is true for breaking the wireless walls in the analogy. But, I'd say true data security is at another place. I keep promoting the Fonera concept as an instance. It opens up a free wifi port, if you choose so, and anyone can connect to the internet through that, without having any access to your LAN. It also uses a QoS which will never let your bandwidth drop from that public usage. That's security, and it's open. And who doesn't want to be able to use internet freely anywhere you can find wifi spots? I have 3G myself, but that's beyond the point here. If I have a wifi at home I want to let people freely use it for internet as to not be an hypocrite and even guests can easily access my files, just for reading access, so I don't need to keep setting up encryption and pass-phrases that are not whole compatible. I'll probably be bashed for promoting the non-usage of WPA 2 with AES or whatever, but I wanted to know from more experienced (super) users out there: what do you think? Is there really a need for encryption to have true wireless security?

  • network is not available even when cisco vpn client is connected. wrong route?

    - by javapowered
    I'm using Vodafone 3G modem. I've disabled other network devices in the system (ethernet, wifi, wimax) turned off firewall and antivirus. cisco vpn client connects successfully but I still can not access computer 192.168.147.120 (as well as any other computer from network). Any suggestions are welcome as I don't know what to do. ipconfig /all and route print commands (translated to english): Microsoft Windows [Version 6.1.7601] (C) Microsoft Corporation (Microsoft Corp.), 2009. All rights reserved. C: \ Users \ Oleg> ipconfig / all IP Configuration for Windows The name of the computer. . . . . . . . . : OlegPC The primary DNS-suffix. . . . . . : Node Type. . . . . . . . . . . . . : Hybrid IP-routing is enabled. . . . : No WINS-proxy enabled. . . . . . . : No Ethernet adapter Local Area Connection 4: DNS-suffix for this connection. . . . . : Description. . . . . . . . . . . . . : Cisco Systems VPN Adapter Physical Address. . . . . . . . . 00-05-9A-3C-78-00 DHCP is enabled. . . . . . . . . . . : No Autoconfiguration Enabled. . . . . . : Yes Local IPv6-address channel. . . : Fe80:: c073: 41b2: 852f: eb87% 26 (Preferred) IPv4-address. . . . . . . . . . . . : 10.53.127.204 (Preferred) The subnet mask. . . . . . . . . . : 255.0.0.0 Default Gateway. . . . . . . . . : IAID DHCPv6. . . . . . . . . . . : 536872346 DUID the client DHCPv6. . . . . . . 00-01-00-01-14-6F-4C-8D-60-EB-69-85-10-2D DNS-servers. . . . . . . . . . . : Fec0: 0:0: ffff:: 1% 1 fec0: 0:0: ffff:: 2% 1 fec0: 0:0: ffff:: 3% 1 NetBios over TCP / IP. . . . . . . . : Disabled Adapter mobile broadband connection through a broadband adapter mobile communications: DNS-suffix for this connection. . . . . : Description. . . . . . . . . . . . . : Vodafone Mobile Broadband Network Adapter (Huawei) Physical Address. . . . . . . . . 58-2C-80-13-92-63 DHCP is enabled. . . . . . . . . . . : No Autoconfiguration Enabled. . . . . . : Yes IPv4-address. . . . . . . . . . . . : 10.229.227.77 (Preferred) The subnet mask. . . . . . . . . . : 255.255.255.252 Default Gateway. . . . . . . . . : 10.229.227.78 DNS-servers. . . . . . . . . . . : 163.121.128.134 212.103.160.18 NetBios over TCP / IP. . . . . . . . : Disabled Tunnel adapter isatap. {737FF02E-D473-4F91-840E-2A4DD293FC12}: State of the environment. . . . . . . . : DNS Suffix. DNS-suffix for this connection. . . . . : Description. . . . . . . . . . . . . : Adapter Microsoft ISATAP # 3 Physical Address. . . . . . . . . 00-00-00-00-00-00-00-E0 DHCP is enabled. . . . . . . . . . . : No Autoconfiguration Enabled. . . . . . : Yes Tunnel adapter isatap. {EF585226-5B07-4446-A5A4-CB1B8E4B13AC}: State of the environment. . . . . . . . : DNS Suffix. DNS-suffix for this connection. . . . . : Description. . . . . . . . . . . . . : Adapter Microsoft ISATAP # 4 Physical Address. . . . . . . . . 00-00-00-00-00-00-00-E0 DHCP is enabled. . . . . . . . . . . : No Autoconfiguration Enabled. . . . . . : Yes Tunnel adapter Teredo Tunneling Pseudo-Interface: DNS-suffix for this connection. . . . . : Description. . . . . . . . . . . . . : Teredo Tunneling Pseudo-Interface Physical Address. . . . . . . . . 00-00-00-00-00-00-00-E0 DHCP is enabled. . . . . . . . . . . : No Autoconfiguration Enabled. . . . . . : Yes IPv6-address. . . . . . . . . . . . : 2001:0:4137:9 e76: ea: b77: f51a: 1cb2 (Basically d) Local IPv6-address channel. . . : Fe80:: ea: b77: f51a: 1cb2% 16 (Preferred) Default Gateway. . . . . . . . . ::: NetBios over TCP / IP. . . . . . . . 
: Disabled C: \ Users \ Oleg> route print ================================================== ========================= List of interfaces 26 ... 00 05 9a 3c 78 00 ...... Cisco Systems VPN Adapter 23 ... 58 2c 80 13 92 63 ...... Vodafone Mobile Broadband Network Adapter (Huawei) 1 ........................... Software Loopback Interface 1 19 ... 00 00 00 00 00 00 00 e0 Adapter Microsoft ISATAP # 3 20 ... 00 00 00 00 00 00 00 e0 Adapter Microsoft ISATAP # 4 16 ... 00 00 00 00 00 00 00 e0 Teredo Tunneling Pseudo-Interface ================================================== ========================= IPv4 Route Table ================================================== ========================= Active Routes: Network Destination Netmask Gateway Interface Metric 0.0.0.0 0.0.0.0 10.229.227.78 10.229.227.77 296 10.0.0.0 255.0.0.0 On-link 10.53.127.204 286 10.6.93.21 255,255,255,255 10.0.0.1 10.53.127.204 100 10.13.50.12 255,255,255,255 10.0.0.1 10.53.127.204 100 10.53.8.0 255.255.252.0 10.0.0.1 10.53.127.204 100 10.53.127.204 255.255.255.255 On-link 10.53.127.204 286 10.53.128.0 255.255.248.0 10.0.0.1 10.53.127.204 100 10.53.148.0 255,255,255,240 10.0.0.1 10.53.127.204 100 10.53.148.16 255,255,255,240 10.0.0.1 10.53.127.204 100 10.229.227.76 255.255.255.252 On-link 10.229.227.77 296 10.229.227.77 255.255.255.255 On-link 10.229.227.77 296 10.229.227.79 255.255.255.255 On-link 10.229.227.77 296 10.255.255.255 255.255.255.255 On-link 10.53.127.204 286 127.0.0.0 255.0.0.0 On-link 127.0.0.1 306 127.0.0.1 255.255.255.255 On-link 127.0.0.1 306 127.255.255.255 255.255.255.255 On-link 127.0.0.1 306 192.168.147.0 255,255,255,240 10.0.0.1 10.53.127.204 100 192.168.147.96 255,255,255,240 10.0.0.1 10.53.127.204 100 192,168,147,112 255,255,255,240 10.0.0.1 10.53.127.204 100 192,168,147,128 255,255,255,240 10.0.0.1 10.53.127.204 100 192,168,147,144 255,255,255,240 10.0.0.1 10.53.127.204 100 192,168,147,224 255,255,255,240 10.0.0.1 10.53.127.204 100 192.168.214.0 255.255.255.0 10.0.0.1 10.53.127.204 100 192.168.215.0 255.255.255.0 10.0.0.1 10.53.127.204 100 194.247.133.19 255,255,255,255 10.0.0.1 10.53.127.204 100 213,247,231,194 255,255,255,255 10.229.227.78 10.229.227.77 100 224.0.0.0 240.0.0.0 On-link 127.0.0.1 306 224.0.0.0 240.0.0.0 On-link 10.229.227.77 296 224.0.0.0 240.0.0.0 On-link 10.53.127.204 286 255.255.255.255 255.255.255.255 On-link 127.0.0.1 306 255.255.255.255 255.255.255.255 On-link 10.229.227.77 296 255.255.255.255 255.255.255.255 On-link 10.53.127.204 286 ================================================== ========================= Persistent Routes: None IPv6 Route Table ================================================== ========================= Active Routes: If Metric Network Destination Gateway 16 58:: / 0 On-link 1306:: 1 / 128 On-link 16 58 2001:: / 32 On-link 16 306 2001: 0:4137:9 e76: ea: b77: f51a: 1cb2/128 On-link 16 306 fe80:: / 64 On-link 26 286 fe80:: / 64 On-link 16 306 fe80:: ea: b77: f51a: 1cb2/128 On-link 26 286 fe80:: c073: 41b2: 852f: eb87/128 On-link 1306 ff00:: / 8 On-link 16 306 ff00:: / 8 On-link 26 286 ff00:: / 8 On-link ================================================== ========================= Persistent Routes: None C: \ Users \ Oleg>

  • Home and Small to Medium Enterprise network manufacturer choice, Netgear, Linksys or D-Link ?

    - by Kedare
    (Please don't close this post, it's a serious post so... Be cool, no trolls please, I need an answer ;p) Hello, I am looking for an alternative to Cisco (too expensive for me !) for semi-pro utilization (at home but with advanced feature (I'm studying in IT)) and in small/medium enterprises. I think I will choose between LinkSys (Including Cisco Small Business), Netgear and D-Link, but I've never really used these products, that what I need is a manufacturer that make "almost" all type of networking equipment (Like Cisco but cheaper..), here are my needs : I need almost all my products to be rackable I need a good warranty (Netgear lifetime waranty rulez!) I need an "unified" network environment I made a little comparison of the characteristics that interest me after hours of search on Internet (based on result found on many websites): (Prices are based on the ldlc-pro.com french website) Hotline/Support Quality: Netgear : Not so bad Linksys : Not so bad D-Link : Poor! Most common Warranty: Netgear : Unlimited Lifetime Warranty! Linksys : Limited 3 years warranty D-Link : Limited 5 years warranty (Unlimited in US but I'm on France :(...) VPN protocols compatibles with routers on endpoint mode: Netgear : Only IPSEC :( Linksys : IPSEC, PPTP, L2TP D-Link : IPSEC, PPTP, L2TP Cheaper 8 ports Gb switch : Netgear : 30€ Linksys : 47€ D-Link : 30€ Cheaper 48 ports + 1Gb uplink(s) administrable switch : Netgear : 263€ Linksys : 630€ D-Link : 600€ Cheaper VPN router : Netgear : 100€ Linksys : 80€ D-Link : 60€ Cheaper rackable switch : Netgear : 50€ Linksys : 87€ D-Link : 50€ Cheaper rackable and administrable switch : Netgear : 120€ Linksys : 370€ D-Link : 171€ Netgear and D-Link are in the same range of price, where Linksys is more expensives. I've searched for some other criteria ( the full comparison is here, in french with shop/source links: http://forums.jeuxonline.info/showthread.php?t=1072280 ) and made a final score for each manufacturer : SCORE including IP camera sub-score: Netgear : 6.2/10 Linksys : 7.3/10 D-Link : 7.0/10 SCORE excluding IP camera sub-score: Netgear : 6.9/10 Linksys : 7.0/10 D-Link : 6.7/10 On both case, Linksys wins. So here is my little comparison, but because I've never really used these stuffs, I need your help to make a decision on witch manufacturer choose for both my personnal and corporate use. So here are the questions : What manufacturer do you recommend me (Not cisco (except Small business)) ? Why ? Have you called the call center of the customer support of one of these manufacturer ? How it was ? Did you had problems or bad experiences with these equipments ? Any other advices ? ;) Thank you !

  • Apache2 cgi's crash on odbc db access (but run fine from shell)

    - by Martin
    Problem overview (details below): I'm having an apache2 + ruby integration problem when trying to connect to an ODBC data source. The main problem boils down to the fact that scripts that run fine from an interactive shell crash ruby on the database connect line when run as a cgi from apache2. Ruby cgi's that don't try to access the ODBC datasource work fine. And (again) ruby scripts that connect to a database with ODBC do fine when executed from the command line (or cron). This behavior is identical when I use perl instead of ruby. So, the issue seems to be with the environment provided for ruby (perl) by apache2, but I can't figure out what is wrong or what to do about it. Does anyone have any suggestions on how to get these cgi scripts to work properly? I've tried many different things to get this to work, and I'm happy to provide more detail of any aspect if that will help. Details: Mac OS X Server 10.5.8 Xserve 2 x 2.66 Dual-Core Intel Xeon (12 GB) Apache 2.2.13 ruby 1.8.6 (2008-08-11 patchlevel 287) [universal-darwin9.0] ruby-odbc 0.9997 dbd-odbc (0.2.5) dbi (0.4.3) mod_ruby 1.3.0 Perl -- 5.8.8 DBI -- 1.609 DBD::ODBC -- 1.23 odbc driver: DataDirect SequeLink v5.5 (/Library/ODBC/SequeLink.bundle/Contents/MacOS/ivslk20.dylib) odbc datasource: FileMaker Server 10 (v10.0.2.206) ) a minimal version of a script (anonymized) that will crash in apache but run successfully from a shell: #!/usr/bin/ruby require 'cgi' require 'odbc' cgi = CGI.new("html3") aConnection = ODBC::connect('DBFile', "username", 'password') aQuery = aConnection.prepare("SELECT zzz_kP_ID FROM DBTable WHERE zzz_kP_ID = 81044") aQuery.execute aRecord = aQuery.fetch_hash.inspect aQuery.drop aConnection.disconnect # aRecord = '{"zzz_kP_ID"=>81044.0}' cgi.out{ cgi.html{ cgi.body{ "<pre>Primary Key: #{aRecord}</pre>" } } } Example of running this from a shell: gamma% ./minimal.rb (offline mode: enter name=value pairs on standard input) Content-Type: text/html Content-Length: 134 <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN"><HTML><BODY><pre>Primary Key: {"zzz_kP_ID"=>81044.0}</pre></font></BODY></HTML>% gamma% ) typical crash log lines: Dec 22 14:02:38 gamma ReportCrash[79237]: Formulating crash report for process perl[79236] Dec 22 14:02:38 gamma ReportCrash[79237]: Saved crashreport to /Library/Logs/CrashReporter/perl_2009-12-22-140237_HTCF.crash using uid: 0 gid: 0, euid: 0 egid: 0 Dec 22 14:03:13 gamma ReportCrash[79256]: Formulating crash report for process perl[79253] Dec 22 14:03:13 gamma ReportCrash[79256]: Saved crashreport to /Library/Logs/CrashReporter/perl_2009-12-22-140311_HTCF.crash using uid: 0 gid: 0, euid: 0 egid: 0
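    The "runs from a shell, dies under Apache" pattern very often comes down to environment variables the interactive shell has and the CGI environment doesn't (the ODBC ini locations, the dynamic linker path for the SequeLink driver, and so on). A hedged sketch of passing them through in the Apache config; the exact paths are assumptions and should be copied from a shell where the script works:

        # mod_env must be loaded for SetEnv
        SetEnv ODBCINI /Library/ODBC/odbc.ini
        SetEnv ODBCSYSINI /Library/ODBC
        SetEnv DYLD_LIBRARY_PATH /Library/ODBC/SequeLink.bundle/Contents/MacOS

    Dumping ENV from the CGI just before the ODBC::connect line and diffing it against env from a working shell should show exactly which variables matter.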

  • Monit won't run

    - by Yaniro
    I have two identical EC2 instances (the second is a replica of the first), running Gentoo. The first instance has monit running which monitors a single process and some system resources and functions great. In the second instance, monit runs but quits right away. The configuration is similar on both instances so are the versions of monit. monit.log shows: [GMT Oct 3 08:36:41] info : monit daemon with PID 5 awakened Final lines on strace monit show: write(2, "monit daemon with PID 5 awakened"..., 33monit daemon with PID 5 awakened ) = 33 time(NULL) = 1349252827 open("/etc/localtime", O_RDONLY) = 4 fstat64(4, {st_mode=S_IFREG|0644, st_size=118, ...}) = 0 fstat64(4, {st_mode=S_IFREG|0644, st_size=118, ...}) = 0 mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb773a000 read(4, "TZif2\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\1\0\0\0\1\0\0\0\0"..., 4096) = 118 _llseek(4, -6, [112], SEEK_CUR) = 0 read(4, "\nGMT0\n", 4096) = 6 close(4) = 0 munmap(0xb773a000, 4096) = 0 write(3, "[GMT Oct 3 08:27:07] info :"..., 33) = 33 write(3, "monit daemon with PID 5 awakened"..., 33) = 33 waitpid(-1, NULL, WNOHANG) = -1 ECHILD (No child processes) close(3) = 0 exit_group(0) = ? No core dumps (ulimit -c shows unlimited) monit -v shows: monit: Debug: Adding host allow 'localhost' monit: Debug: Skipping redundant host 'localhost' monit: Debug: Skipping redundant host 'localhost' monit: Debug: Adding credentials for user 'xxxx'. Runtime constants: Control file = /etc/monitrc Log file = /var/log/monit/monit.log Pid file = /var/run/monit.pid Id file = /var/run/monit.pid Debug = True Log = True Use syslog = False Is Daemon = True Use process engine = True Poll time = 30 seconds with start delay 0 seconds Expect buffer = 256 bytes Event queue = base directory /var/monit with 100 slots Mail server(s) = xx.xxx.xx.xxx with timeout 30 seconds Mail from = (not defined) Mail subject = (not defined) Mail message = (not defined) Start monit httpd = True httpd bind address = Any/All httpd portnumber = 2812 httpd signature = True Use ssl encryption = False httpd auth. style = Basic Authentication and Host/Net allow list Alert mail to = [email protected] Alert on = All events The service list contains the following entries: System Name = xxxx Monitoring mode = active CPU wait limit = if greater than 20.0% 1 times within 1 cycle(s) then alert else if succeeded 1 times within 1 cycle(s) then alert CPU system limit = if greater than 30.0% 1 times within 1 cycle(s) then alert else if succeeded 1 times within 1 cycle(s) then alert CPU user limit = if greater than 70.0% 1 times within 1 cycle(s) then alert else if succeeded 1 times within 1 cycle(s) then alert Swap usage limit = if greater than 25.0% 1 times within 1 cycle(s) then alert else if succeeded 1 times within 1 cycle(s) then alert Memory usage limit = if greater than 75.0% 1 times within 1 cycle(s) then alert else if succeeded 1 times within 1 cycle(s) then alert Load avg. (5min) = if greater than 2.0 1 times within 1 cycle(s) then alert else if succeeded 1 times within 1 cycle(s) then alert Load avg. 
(1min) = if greater than 4.0 1 times within 1 cycle(s) then alert else if succeeded 1 times within 1 cycle(s) then alert Process Name = xxxx Group = server Pid file = /var/run/xxxx.pid Monitoring mode = active Start program = '/etc/init.d/xxxx restart' timeout 20 second(s) Stop program = '/etc/init.d/xxxx stop' timeout 30 second(s) Existence = if does not exist 1 times within 1 cycle(s) then restart else if succeeded 1 times within 1 cycle(s) then alert Pid = if changed 1 times within 1 cycle(s) then alert Ppid = if changed 1 times within 1 cycle(s) then alert Timeout = If restarted 3 times within 5 cycle(s) then unmonitor Alert mail to = [email protected] Alert on = All events Alert mail to = [email protected] Alert on = All events ------------------------------------------------------------------------------- monit daemon with PID 5 awakened Ran emerge --sync before emerge -va monit which installed monit v5.3.2. When that didn't work i've downloaded v5.5 from their website and compiled from source which did not work either.
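    Since the daemonised copy exits without leaving anything useful in the log or a core file, running it in the foreground against the same control file is probably the fastest way to see what kills it:

        # -I keeps monit in the foreground, -v makes it verbose; same control file as the init script
        monit -I -v -c /etc/monitrc

    If the foreground run stays up, the difference lies in how the init script or its environment starts monit on the second instance.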

  • Install Ubuntu 12.04 in UEFI mode on a HP Pavilion dv6-6c40ca

    - by Marlen T. B.
    I have recently (as of July 2012) bought a HP Pavilion dv6-6c40ca laptop. It came pre-installed with Windows 7 on an MBR. I installed Ubuntu 12.04 on it on a GPT partition in what I think is BIOS emulation mode. I made a BIOS-Grub partition so the install didn't fail. That is what it is for .. right? Now I want to upgrade to UEFI mode. How would I Install Ubuntu 12.04 in UEFI mode on a HP Pavilion dv6-6c40ca. Or is it impossible? My laptop, despite its new age may not be UEFI 2.0+ capable. If it isn't how can I install a software UEFI (i.e. a DUET such as the one by tianocore). Or is this too impossible? A link to my laptop's specs is: http://h10025.www1.hp.com/ewfrf/wc/document?docname=c03137924&tmp_task=prodinfoCategory&cc=ca&dlc=en&lang=en&lc=en&product=5218530 My laptop should have a UEFI given this link from HP http://h10025.www1.hp.com/ewfrf/wc/document?cc=us&lc=en&docname=c01442956#N218. And from the link I draw a quote: That means most notebooks distributed with Windows Vista, and all notebooks distributed with Windows 7, have the UEFI environment. My laptop had Windows 7 Home Premium pre-installed. OK. Following the comments so far -- NOTE: I am trying to do this on an external drive so I can see if it works. I have partitioned the drive using GParted as a GPT drive. Created a 200MB partition at the beginning of the drive with a FAT32 file system. Given the 200MB partition a label of "EFI". Set the boot flag on the 200MB partition. What should a do next to install Ubuntu 12.04? Given the link: https://help.ubuntu.com/community/UEFIBooting#Selecting_the_.28U.29EFI_Graphic_Protocol In my first read through (just to see if I will understand everything before I start) I get to step 2.3 Install GRUB2 in (U)EFI systems The first line is Boot into Linux (any live ISO) preferably in UEFI mode. Um .. how do you tell what mode your live CD is in?! And how do you change it if the mode is wrong?
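    On the "how do you tell what mode your live CD is in" question, a quick check from a terminal in the live session works regardless of the firmware's own menus:

        # if the kernel was booted via UEFI this directory exists; in BIOS/CSM mode it does not
        [ -d /sys/firmware/efi ] && echo "booted in UEFI mode" || echo "booted in BIOS/legacy mode"

    If the live session comes up in BIOS mode, choosing the "UEFI:" entry for the USB stick or DVD in the firmware boot menu (rather than the plain device entry) is usually what switches it.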

    Read the article

  • Using WSUS Admin Console from outside domain

    - by Nick
    Environment: I have a workstation on our primary domain. We have a primary WSUS server that is the upstream server of 8 different testing domains. The primary WSUS server is not part of any domain. Routing is configured between my workstation and the primary WSUS server: I can RDP to the primary WSUS server without any problem, and the router is configured to forward any/any traffic between my workstation and the primary WSUS server. This WSUS server cannot be part of a domain due to external requirements (which I can't change) on the lab I work in. The version of WSUS is WSUS 3.0 SP 2.

    What I want to do: I need to connect to the WSUS server with the WSUS Admin console from my local workstation. The end goal is to connect via PowerShell and manage it with that. I also need to take what I do here and port it to the 8 test domains so I can manage those WSUS servers. The routing is all in place so I can talk to the servers; it's just connecting to the WSUS console that is causing problems.

    The problem: I cannot get my workstation to connect to the WSUS console. I get one of the following errors depending on the setup.

    1st error: Cannot connect to 'WSUS'. You do not have the permissions required to access this WSUS server. To connect to the server you must be a member of the WSUS Administrators or WSUS Reporters security groups. I also get warning 7012 from the event log, which says the same thing.

    2nd error: Cannot connect to 'WSUS'. The server may be using another port or different Secure Sockets Layer setting.

    What I have tried: So far I have configured IIS for Anonymous Authentication on both the WSUS Administration and ApiRemoting30 sites, using an account we will call WSUS_User. With this in place, I get the 1st error. When I do this, though, the local WSUS console cannot be used either. Reverting back to only Windows Authentication allows the local console to work, but the remote console now gives the 2nd error. I have confirmed the port, and that there is no SSL in use (a policy that is pushed from above and that I cannot affect). I have placed WSUS_User in the groups mentioned above, but it still does not connect. I made sure WSUS_User has full access on C:\Program Files\Update Services and C:\Program Files\Update Services\WebServices.

    I am not very familiar with the workings of WSUS or IIS and have gone as far as I can figure out on my own. Googling these errors all takes me to the same steps about Anonymous Authentication and configuring permissions on folders. Note: I have cross-posted this to StackOverflow as well.

    Read the article

  • MD3200 - 3 to 4x less throughput than MD1220. Am I missing something here?

    - by Igor Polishchuk
    I have two R710 servers with similar configurations. One, in my office, has an MD1220 attached. The other, in the datacenter of my hosting services vendor, has an MD3200. I'm getting significantly worse throughput from the MD3200 in my vendor's setup. I'm mostly interested in sequential writes, and I'm getting these results in bonnie++ and dd tests:

    Seq. writes on MD1220 in my office: 1.1 GB/s - bonnie++, 1.3 GB/s - dd
    Seq. writes on MD3200 at my vendor's: 240 MB/s - bonnie++, 310 MB/s - dd

    Unfortunately, I could not test exactly the same configurations, but the two I have should be comparable. If anything, my good-performing environment is cheaper than the bad-performing one. I expect at least similar throughput from these two setups. My vendor cannot really help me. Hopefully somebody more familiar with DAS performance can look at it and tell me if I'm missing something here and my expectations are too high.

    To summarize, the question is: is it reasonable to expect about 100 MB/s of sequential write throughput for each couple of drives in RAID10 on the MD3200? Is there any trick to enable such performance on the MD3200 with dual controllers, as opposed to the simpler MD1220 with a single H800 adapter?

    More details about the configurations.

    The good one in my office: Dell R710, 2 CPUs X5650 @ 2.67GHz (12 cores), 96GB DDR3, OS: RHEL 5.5, kernel 2.6.18-194.26.1.el5 x86_64; 20x300GB 2.5" SAS 10K drives in a single RAID10, 1MB chunk size, on MD1220 + Dell H800 I/O controller with 1GB cache in the host.

    The not-so-good one at my vendor's: Dell R710, 2 CPUs L5520 @ 2.27GHz (8 cores), 144GB DDR3, OS: RHEL 5.5, kernel 2.6.18-194.11.4.el5 x86_64; 20x146GB 2.5" SAS 15K drives in a single RAID10, 512KB chunk size, Dell MD3200 with 2 I/O controllers in the array, 1GB cache each.

    Additional information: I've also run the same tests on the same vendor's host, but with different storage: two RAID 10 sets of 14x146GB 15K RPM drives each, striped together at the OS level, on MD3000+MD1000. The performance was about 25% worse than on the MD3200 despite having more drives. When I ran similar tests on the internal storage of my vendor's host (2x146GB 15K RPM drives in RAID1, PERC 6i), I got about 128 MB/s seq. writes. Just two internal drives gave me about half the throughput of the 20 drives on the MD3200.

    The random I/O performance of the MD3200 setup is OK; it gives me at least 1300 IOPS. I mostly have problems with sequential I/O throughput.

    Thank you for looking into it. Regards, Igor
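
    For reference, a minimal sketch of the kind of sequential-write test described above (the mount point and file size are my assumptions, not from the original post); the test size should comfortably exceed RAM on these 96-144GB hosts so caches don't inflate the numbers:

    # Sequential write with dd, bypassing the page cache and syncing data at the end.
    dd if=/dev/zero of=/mnt/array/ddtest bs=1M count=200000 oflag=direct conv=fdatasync

    # bonnie++ block-I/O only: -f skips per-character tests, -n 0 skips file-creation tests.
    bonnie++ -d /mnt/array -s 200g -n 0 -f -u nobody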

    Read the article

  • SharePoint AD imported users are becoming sporadically corrupted, causing us to have to create a new account

    - by TrevJen
    SharePoint 2007 MOSS with AD-imported users. All servers are 2008.

    ***UPDATE: More details from testing. This SharePoint is in an AD child domain (clients.mycompany.local), which is below the root of the AD tree (mycompany.local). The user is in the parent tree (as are half of the other functional users). I have elevated the user's rights to Domain. Looking at the logs, it seems that the SharePoint server is trying to authenticate the user by querying the DC for the clients domain (which is the way it normally works and still works for all existing, identically configured users). I think if I could force it to authenticate up to the top-level domain DC then it would be OK??

    I have around 50 users. Over the past 2 months, a handful of the users have suddenly become unable to log in to SharePoint. When they log in, they either get a blank screen or they are re-prompted. These users are using accounts that have been in use for many months; sometimes the problem originates with a password change. In all cases, the user's account works on every other Active Directory authenticated resource (domain, Exchange, LDAP).

    In the most recent case, last night I force-deleted a user ("John Smith") because of corruption. The original account name was jsmith. I deleted him from Active Directory, then deleted him from the profile list in SharePoint Shared Services. I could not find a way to delete him from the SharePoint user list, but I reran the import after recreating his account (renamed, too, just to be sure, to "smithj"). At first this did not work; the user could still access all other resources but SharePoint. Then, some 30 minutes later, it inexplicably started working. This morning the user changed passwords, which immediately broke the login on SharePoint again.

    Logs (by request from matt b):

    Office SharePoint Server
    Date: 4/13/2010 2:00:00 PM
    Event ID: 7888
    Task Category: Office Server General
    Level: Error
    Keywords: Classic
    User: N/A
    Computer: nb-portal-01.clients.netboundary.local
    Description: A runtime exception was detected. Details follow.
    Message: Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED))
    Technical Details: System.UnauthorizedAccessException: Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED))
    at Microsoft.SharePoint.SPGlobal.HandleUnauthorizedAccessException(UnauthorizedAccessException ex)
    at Microsoft.SharePoint.Library.SPRequest.UpdateField(String bstrUrl, String bstrListName, String bstrXML)
    at Microsoft.SharePoint.SPField.UpdateCore(Boolean bToggleSealed)
    at Microsoft.SharePoint.SPField.Update()
    at Microsoft.Office.Server.UserProfiles.SiteSynchronizer.UserSynchronizer.PushSchemaToList(Boolean& bAddedColumn)
    at Microsoft.Office.Server.UserProfiles.SiteSynchronizer.UserSynchronizer.SynchFull()
    at Microsoft.Office.Server.UserProfiles.SiteSynchronizer.Synch()
    at Microsoft.Office.Server.Diagnostics.FirstChanceHandler.ExceptionFilter(Boolean fRethrowException, TryBlock tryBlock, FilterBlock filter, CatchBlock catchBlock, FinallyBlock finallyBlock)

    Log Name: Application
    Source: Office SharePoint Server
    Date: 4/13/2010 2:00:00 PM
    Event ID: 5553
    Task Category: User Profiles
    Level: Error
    Keywords: Classic
    User: N/A
    Computer: nb-portal-01.clients.netboundary.local
    Description: failure trying to synch site 6fea15e2-0899-4c19-9016-44d77834c018 for ContentDB b2002b0b-3d4c-411a-8c4f-3d047ca9322c WebApp 3aff7051-455d-4a70-a377-5b1c36df618e. Exception message was Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED)).

    Read the article

  • Apache Solr Admin on Tomcat Deployed in WebApps Directory

    - by KM01
    I am trying to get Apache Solr to work on RedHat 6 and Tomcat 6 (using these instructions), but get this error when browsing to the admin section, http://localhost:8080/solr-example/admin:

    HTTP Status 404 - missing core name in path
    type: Status report
    message: missing core name in path
    description: The requested resource (missing core name in path) is not available.

    http://localhost:8080/solr-example loads fine, with a link to "Solr Admin."

    My setup is as follows:
    Tomcat 6: /etc/tomcat6
    Solr: /app/solr/example

    I have a solr-example.xml in /etc/tomcat6/Catalina/localhost/, which reads:

    <?xml version="1.0" encoding="utf-8"?>
    <Context docBase="/app/solr/example/apache-solr-3.4.0.war" debug="0" crossContext="true">
      <Environment name="solr/home" type="java.lang.String" value="/app/solr/example" override="true"/>
    </Context>

    I don't see anything in the logs (/var/log/tomcat6); the only entries in catalina.out are about the starting and stopping of Tomcat.

    My questions are:

    1. What else do I need to do to get "Solr Admin" to work under Tomcat?
    2. Where are these "cores" supposed to be specified? I see an entry in /app/solr/example/solr/solr.xml:

    <solr persistent="false">
      <!-- adminPath: RequestHandler path to manage cores. If 'null' (or absent), cores will not be manageable via request handler -->
      <cores adminPath="/admin/cores" defaultCoreName="collection1">
        <core name="collection1" instanceDir="." />
      </cores>
    </solr>

    3. How do I go about ensuring that logs are working correctly? I can't find logs that contain any mention of the 404 above.

    Update in response to @quanta's comment: I downloaded the former (apache-solr-3.4.0.tgz). dataDir was not set; it is now set to:

    <dataDir>${solr.data.dir:../solr/data}</dataDir>

    JAVA_OPTS: /usr/lib/jvm/java/bin/java -classpath :/usr/share/tomcat6/bin/bootstrap.jar:/usr/share/tomcat6/bin/tomcat-juli.jar:/usr/share/java/commons-daemon.jar -Dcatalina.base=/usr/share/tomcat6 -Dcatalina.home=/usr/share/tomcat6 -Djava.endorsed.dirs= -Djava.io.tmpdir=/var/cache/tomcat6/temp -Djava.util.logging.config.file=/usr/share/tomcat6/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager org.apache.catalina.startup.Bootstrap start

    catalina.out contains no indication of the above error.
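
    One quick way to see whether the webapp actually registered any cores (my own suggestion, assuming the context path /solr-example and port 8080 from above) is to ask the CoreAdmin handler directly from the server:

    # Lists the cores the running Solr instance knows about; a 404 or an empty status
    # section suggests solr/home was not resolved to a directory containing solr.xml.
    curl "http://localhost:8080/solr-example/admin/cores?action=STATUS"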

    Read the article

  • Output php mail calls to log file

    - by Tom McQuarrie
    This question relates to the question found here: Find the php script thats sending mails. I'm trying to do the exact same thing but can't get the log to output what I need. I'm not too experienced with Server Fault, and ideally I'd post my follow-up on the original question, or PM Adam to see if he ever found a solution, but it looks as though Server Fault doesn't work that way. I can post an "answer", but that's definitely not what this is.

    I have a script located at /usr/local/bin/sendmail-php-logged, with the following:

    #!/bin/sh
    logger -p mail.info sendmail-php: site=${HTTP_HOST}, client=${REMOTE_ADDR}, script=${SCRIPT_NAME}, filename=${SCRIPT_FILENAME}, docroot=${DOCUMENT_ROOT}, pwd=${PWD}, uid=${UID}, user=$(whoami)
    /usr/sbin/sendmail -t -i $*

    This is logging to /var/log/maillog, but as Adam mentions in his question, none of the server variables work. The output I'm getting is:

    Oct 4 12:16:21 fluke logger: sendmail-php: site=, client=, script=, filename=, docroot=, pwd=/var/www/html/aro_chroot/sites/arocms, uid=48, user=apache
    Oct 4 12:16:21 fluke logger: sendmail-php: site=, client=, script=, filename=, docroot=, pwd=/var/www/html/aro_chroot/sites/arocms, uid=48, user=apache
    Oct 4 12:17:03 fluke logger: sendmail-php: site=, client=, script=, filename=, docroot=, pwd=/var/www/html/aro_chroot/sites/arocms, uid=48, user=apache
    Oct 4 12:17:05 fluke logger: sendmail-php: site=, client=, script=, filename=, docroot=, pwd=/root, uid=0, user=root
    Oct 4 12:17:11 fluke logger: sendmail-php: site=, client=, script=, filename=, docroot=, pwd=/var/www/html/aro_chroot/sites/arocms, uid=48, user=apache
    Oct 4 12:17:14 fluke logger: sendmail-php: site=, client=, script=, filename=, docroot=, pwd=/root, uid=0, user=root
    Oct 4 12:17:29 fluke logger: sendmail-php: site=, client=, script=, filename=, docroot=, pwd=/root, uid=0, user=root
    Oct 4 12:17:41 fluke logger: sendmail-php: site=, client=, script=, filename=, docroot=, pwd=/root, uid=0, user=root

    User ID, current user, and pwd are all working, probably because they're globally accessible script resources and not specific to PHP like all the others are. I've tried using other server variables as per labradort's instructions, but no joy. Here are some sample tests:

    logger -p mail.info sendmail-php SCRIPT_NAME: ${SCRIPT_NAME}
    logger -p mail.info sendmail-php SCRIPT_FILENAME: ${SCRIPT_FILENAME}
    logger -p mail.info sendmail-php PATH_INFO: ${PATH_INFO}
    logger -p mail.info sendmail-php PHP_SELF: ${PHP_SELF}
    logger -p mail.info sendmail-php DOCUMENT_ROOT: ${DOCUMENT_ROOT}
    logger -p mail.info sendmail-php REMOTE_ADDR: ${REMOTE_ADDR}
    logger -p mail.info sendmail-php SCRIPT_NAME: $SCRIPT_NAME
    logger -p mail.info sendmail-php SCRIPT_FILENAME: $SCRIPT_FILENAME
    logger -p mail.info sendmail-php PATH_INFO: $PATH_INFO
    logger -p mail.info sendmail-php PHP_SELF: $PHP_SELF
    logger -p mail.info sendmail-php DOCUMENT_ROOT: $DOCUMENT_ROOT
    logger -p mail.info sendmail-php REMOTE_ADDR: $REMOTE_ADDR

    And the output:

    Oct 4 12:58:02 fluke logger: sendmail-php SCRIPT_NAME:
    Oct 4 12:58:02 fluke logger: sendmail-php SCRIPT_FILENAME:
    Oct 4 12:58:02 fluke logger: sendmail-php PATH_INFO:
    Oct 4 12:58:02 fluke logger: sendmail-php PHP_SELF:
    Oct 4 12:58:02 fluke logger: sendmail-php DOCUMENT_ROOT:
    Oct 4 12:58:02 fluke logger: sendmail-php REMOTE_ADDR:
    Oct 4 12:58:02 fluke logger: sendmail-php SCRIPT_NAME:
    Oct 4 12:58:02 fluke logger: sendmail-php SCRIPT_FILENAME:
    Oct 4 12:58:02 fluke logger: sendmail-php PATH_INFO:
    Oct 4 12:58:02 fluke logger: sendmail-php PHP_SELF:
    Oct 4 12:58:02 fluke logger: sendmail-php DOCUMENT_ROOT:
    Oct 4 12:58:02 fluke logger: sendmail-php REMOTE_ADDR:

    I'm running PHP 5.3.10. Unfortunately register_globals is on, for compatibility with legacy systems, but you wouldn't think that would cause the environment variables to stop working. If someone can give me some hints as to why this might not be working I'll be a very happy man :)
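
    As an aside (my own suggestion, not something from the original thread): PHP 5.3 can log mail() calls itself through the mail.log and mail.add_x_header ini settings, which records the calling script and line number without relying on environment variables ever reaching the sendmail wrapper. A minimal sketch, assuming php.ini is at /etc/php.ini and the web server runs as apache:

    # Enable PHP's own mail() call logging and give the web server somewhere to write it.
    cat >> /etc/php.ini <<'EOF'
    mail.add_x_header = On
    mail.log = /var/log/php-mail.log
    EOF
    touch /var/log/php-mail.log
    chown apache:apache /var/log/php-mail.log
    # Restart the web server so the new ini settings take effect (command may differ per distro).
    /etc/init.d/httpd restart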

    Read the article

  • Automatically Applying Security Updates for AWS Elastic Beanstalk

    - by Eric Anderson
    I've been a fan of Heroku since its earliest days, but I like the fact that AWS Elastic Beanstalk gives you more control over the characteristics of the instances. One thing I love about Heroku is that I can deploy an app and not worry about managing it; I am assuming Heroku ensures all OS security updates are applied in a timely manner, so I just need to make sure my app is secure. My initial research on Beanstalk shows that although it builds and configures the instances for you, after that it moves to a more manual management process: security updates won't automatically be applied to the instances.

    It seems there are two areas of concern:

    New AMI releases: as new AMI releases come out, it seems we would want to run the latest (presumably most secure) one. But my research seems to indicate you need to manually launch a new setup to see the latest AMI version and then create a new environment to use that new version. Is there a better, automated way of rotating your instances onto new AMI releases?

    Security updates between releases: in between AMI releases there will be security updates released for packages, and it seems we want to apply those as well. My research seems to indicate people install commands to occasionally run a yum update. But since new instances are created and destroyed based on usage, new instances would not always have the updates (i.e. the window between instance creation and the first yum update), so occasionally you will have instances that aren't patched. You are also going to have instances constantly patching themselves until a new AMI release is applied. My other concern is that perhaps these security updates haven't gone through Amazon's own review (like the AMI releases do), and automatically applying them might break my app. I know DreamHost once had a 12-hour outage because they were applying Debian updates completely automatically without any review. I want to make sure the same thing doesn't happen to me.

    So my question is: does Amazon provide a fully managed PaaS the way Heroku does? Or is AWS Elastic Beanstalk really just an install script, after which you are on your own (other than the monitoring and deployment tools they provide)?
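
    For the second concern, a common stopgap (my own sketch, not an AWS-documented feature) is to drop a cron entry onto each instance, for example from an .ebextensions command during deployment, so that security-only errata are applied on a schedule; it assumes the yum security plugin that ships with Amazon Linux is installed:

    # Apply security errata nightly; new instances pick this up as soon as the deployment
    # hook that installs the file has run, which closes most of the gap between AMI releases.
    cat > /etc/cron.d/yum-security <<'EOF'
    10 3 * * * root yum -y update --security >> /var/log/yum-security.log 2>&1
    EOF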

    Read the article

  • Can't get Passwordless (SSH provided) SFTP working

    - by Shoaibi
    I have a chrooted SFTP setup as below.

    # Package generated configuration file
    # See the sshd_config(5) manpage for details

    # What ports, IPs and protocols we listen for
    Port 22
    # Use these options to restrict which interfaces/protocols sshd will bind to
    #ListenAddress ::
    #ListenAddress 0.0.0.0
    Protocol 2
    # HostKeys for protocol version 2
    HostKey /etc/ssh/ssh_host_rsa_key
    HostKey /etc/ssh/ssh_host_dsa_key
    #Privilege Separation is turned on for security
    UsePrivilegeSeparation yes

    # Lifetime and size of ephemeral version 1 server key
    KeyRegenerationInterval 3600
    ServerKeyBits 768

    # Logging
    SyslogFacility AUTH
    LogLevel INFO

    # Authentication:
    LoginGraceTime 120
    PermitRootLogin without-password
    StrictModes yes
    AllowGroups admins clients

    RSAAuthentication yes
    PubkeyAuthentication yes
    #AuthorizedKeysFile %h/.ssh/authorized_keys

    # Don't read the user's ~/.rhosts and ~/.shosts files
    IgnoreRhosts yes
    # For this to work you will also need host keys in /etc/ssh_known_hosts
    RhostsRSAAuthentication no
    # similar for protocol version 2
    HostbasedAuthentication no
    # Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthentication
    #IgnoreUserKnownHosts yes

    # To enable empty passwords, change to yes (NOT RECOMMENDED)
    PermitEmptyPasswords no

    # Change to yes to enable challenge-response passwords (beware issues with
    # some PAM modules and threads)
    ChallengeResponseAuthentication no

    # Change to no to disable tunnelled clear text passwords
    #PasswordAuthentication yes

    # Kerberos options
    #KerberosAuthentication no
    #KerberosGetAFSToken no
    #KerberosOrLocalPasswd yes
    #KerberosTicketCleanup yes

    # GSSAPI options
    #GSSAPIAuthentication no
    #GSSAPICleanupCredentials yes

    X11Forwarding yes
    X11DisplayOffset 10
    PrintMotd no
    PrintLastLog yes
    TCPKeepAlive yes
    #UseLogin no

    #MaxStartups 10:30:60
    #Banner /etc/issue.net

    # Allow client to pass locale environment variables
    AcceptEnv LANG LC_*

    #Subsystem sftp /usr/lib/openssh/sftp-server

    # Set this to 'yes' to enable PAM authentication, account processing,
    # and session processing. If this is enabled, PAM authentication will
    # be allowed through the ChallengeResponseAuthentication and
    # PasswordAuthentication. Depending on your PAM configuration,
    # PAM authentication via ChallengeResponseAuthentication may bypass
    # the setting of "PermitRootLogin without-password".
    # If you just want the PAM account and session checks to run without
    # PAM authentication, then enable this but set PasswordAuthentication
    # and ChallengeResponseAuthentication to 'no'.
    UsePAM yes

    Subsystem sftp internal-sftp

    Match group clients
        ChrootDirectory /var/chroot-home
        X11Forwarding no
        AllowTcpForwarding no
        ForceCommand internal-sftp

    And a dummy user:

    root:~# tail -n1 /etc/passwd
    david:x:1000:1001::/david:/bin/sh

    Now in this case david can SFTP in using, say, the FileZilla client, and he is chrooted to /var/chroot-home/david/. But what if I want to set up passwordless (key-based) auth? I have tried pasting his key into /var/chroot-home/david/.ssh/authorized_keys, but to no avail. I tried ssh'ing to the box as david and it just stops at "debug1: Sending env LC_CTYPE = C" after I supply the password, and there is nothing shown in auth.log, maybe because it can't find the home dir. If I do "su - david" as root I see "No directory, logging in with HOME=/", which makes sense. A symlink doesn't help either.

    I have also tried with:

    Match group clients
        ChrootDirectory /var/chroot-home/%u
        X11Forwarding no
        AllowTcpForwarding no
        ForceCommand internal-sftp

    And a dummy user:

    root:~# tail -n1 /etc/passwd
    david:x:1000:1001::/var/chroot-home/david:/bin/sh

    This way, if I don't change /var/chroot-home/david to root:root, sshd complains about bad ownership or permission modes, and if I do, david can no longer upload or delete anything directly in his home while using SFTP from FileZilla.
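
    For reference, a minimal sketch (my own assumption about the intended layout, not from the original post) of a per-user chroot that satisfies sshd's ownership checks while still leaving the user somewhere to write. Every component of the chroot path must be root-owned and not group- or world-writable, so uploads go into a subdirectory; and with the second config the key file must live at the home path recorded in /etc/passwd, because the authorized_keys lookup happens before the chroot is applied.

    # Chroot base and the per-user chroot are root-owned and not writable by others.
    chown root:root /var/chroot-home /var/chroot-home/david
    chmod 755 /var/chroot-home /var/chroot-home/david

    # Writable subdirectory inside the chroot for uploads.
    mkdir -p /var/chroot-home/david/files
    chown david:clients /var/chroot-home/david/files
    chmod 750 /var/chroot-home/david/files

    # authorized_keys under the passwd home dir (/var/chroot-home/david in the second config).
    mkdir -p /var/chroot-home/david/.ssh
    cp /tmp/david_id_rsa.pub /var/chroot-home/david/.ssh/authorized_keys   # hypothetical key file
    chown -R david:clients /var/chroot-home/david/.ssh
    chmod 700 /var/chroot-home/david/.ssh
    chmod 600 /var/chroot-home/david/.ssh/authorized_keys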

    Read the article
