Search Results

Search found 21511 results on 861 pages for 'appstore approval process'.

Page 219/861

  • Memory limiting solutions for greedy applications that can crash the OS?

    - by Hooked
    I use my computer for scientific programming. It has a healthy 8GB of RAM and 12GB of swap space. Often, as my problems have gotten larger, I exceed all of the available RAM. Rather than the offending program crashing (which I would prefer), Ubuntu starts pushing everything into swap, including Unity and any open terminals. If I don't catch a runaway program in time, there is nothing I can do but wait - it takes 4-5 minutes just to switch to a virtual console (e.g. Ctrl+Alt+F2) where I can kill the offending process. Since my own stupidity is out of scope of this forum, how can I prevent Ubuntu from crashing via thrashing when a single offending program uses up all of the available memory? At-home experiment*! Open a terminal, launch python and, if you have numpy installed, try this: >>> import numpy >>> [numpy.zeros((10**4, 10**4)) for _ in xrange(50)] * Warning: may have adverse effects; monitor the process via iotop or top to kill it in time. If not, I'll see you after your reboot.
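
    One approach, rather than waiting for the OOM killer, is to cap the address space a single job may claim, so a runaway allocation fails fast instead of dragging the desktop into swap. A minimal sketch of that idea in Python (the 6 GB cap and simulation.py are placeholder assumptions, and note RLIMIT_AS counts virtual address space, not resident RAM):

        # A minimal sketch (Linux-only): cap a child process's address space so the
        # kernel refuses further allocation (MemoryError/ENOMEM) instead of thrashing.
        import resource
        import subprocess
        import sys

        LIMIT_BYTES = 6 * 1024**3  # hypothetical cap: 6 GB of the 8 GB installed

        def run_capped(cmd):
            """Run cmd with RLIMIT_AS capped, so a runaway allocation fails fast."""
            def set_limits():
                resource.setrlimit(resource.RLIMIT_AS, (LIMIT_BYTES, LIMIT_BYTES))
            return subprocess.call(cmd, preexec_fn=set_limits)

        if __name__ == "__main__":
            # e.g. python capped.py python simulation.py
            sys.exit(run_capped(sys.argv[1:] or ["python", "simulation.py"]))

    The same cap is available without Python as ulimit -v in the launching shell, which applies to everything started from that shell.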


  • PHP hits 100% CPU and eats RAM at the same time Monday to Friday

    - by Daniel Samuels
    We run a learning platform for primary schools here in the UK and it's all been running extremely well. However, at around 4PM Monday to Friday we see the same issue arise - 1-2 PHP threads will spike to 100% CPU and gradually start eating up RAM until the server(s) fall over. 98%+ of our requests are HTTPS; these come into our Layer 7 load balancer, which decrypts the SSL data, adds the X-HTTP-Forwarded-For header and forwards the data on to an application server (we have 2 of those at the moment) on port 80. Our application servers have Varnish on port 80, which takes in the request from the load balancer and passes the request through to Nginx on port 81. Nginx then works out which 'vhost' it needs to use and passes any PHP processing through to PHP-CGI, which is listening on a socket (managed through spawn-fcgi). There's an instance of Memcached running too; MySQL runs on a separate server / slave setup. Throughout the day the load will typically go no higher than 0.8 on either of the application servers, but at around 4PM our problem arises. I've managed to run strace on a few of the actual threads when they cause the problem and I always see the same thing: stat("/usr/share/zoneinfo/Europe/London", {st_mode=S_IFREG|0644, st_size=3661, ...}) = 0 stat("/usr/share/zoneinfo/Europe/London", {st_mode=S_IFREG|0644, st_size=3661, ...}) = 0 This is repeated infinitely and never stops until you SIGKILL the process or the OOM killer kills it. There are no cron jobs scheduled to run at that time, and I don't have any way of seeing exactly which Nginx request is associated with the PHP process which is running. We are running PHP 5.3.14, which we upgraded to from 5.3.8 last week to rule out the older version being the problem. This issue has been going on for a few months now and we have no idea what is causing it. We deploy our software very frequently, so it's difficult to track down a specific release which may have started the problem - especially as we do not know the date of the first occurrence of this issue. Varnish is version 3.0.1, Nginx is 1.0.6 (which I understand is about a year old now), our servers are running CentOS release 5.7 (Final) and they have Intel i3 540s at 3.07GHz and 8GB of RAM. There's a discussion on the Debian mailing list about something very similar; you can find that here. Has anyone seen anything like this in the past? Does anyone have any ideas or suggestions? Is there a way of linking an Nginx request directly to a PHP thread? Is there a better way of seeing what the PHP process is doing? (I've seen GDB mentioned, though I'll have to recompile PHP.) Thanks!
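
    One way to gather more evidence when the spike hits: find the PHP process that is burning the CPU and attach strace to it with timestamps, so the trace shows what precedes the zoneinfo stat() loop. A diagnostic sketch (not a fix), assuming psutil is installed and strace is on the box; run as root while the problem is occurring:

        # Find the hottest php process over a 1-second sample, then attach strace.
        import subprocess
        import psutil

        def cpu(p):
            try:
                return p.cpu_percent(None)
            except psutil.NoSuchProcess:
                return 0.0

        def hottest(name_fragment="php"):
            procs = [p for p in psutil.process_iter(["name"])
                     if name_fragment in (p.info["name"] or "")]
            for p in procs:
                cpu(p)                            # prime the per-process counters
            psutil.cpu_percent(interval=1.0)      # 1-second sampling window
            return max(procs, key=cpu, default=None)

        if __name__ == "__main__":
            p = hottest()
            if p:
                print("attaching to PID", p.pid)
                subprocess.call(["strace", "-f", "-tt", "-p", str(p.pid),
                                 "-o", "/tmp/php-spike.trace"])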


  • Office 2010 always reconfiguring itself on startup

    - by Rhys Gibson
    I've just installed Office 2010 Professional Plus (upgrading from Office 2007). It works fine under my admin account, but when I log in with my wife's non-admin account, every time I open a document or start an app (Word, Excel, Publisher ...), Office 2010 goes through its configuration process (starting with the standard install dialog and then running the bootstrap process) before it loads the app - which wastes 2-3 minutes. Once it's done this, the app runs fine and I can make setting changes that are remembered when it restarts, but I can't work out why it thinks it needs to configure the app each time. Any thoughts?
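
    A widely reported workaround for Office 2010 re-running configuration under non-admin accounts (not an official fix, so test it on one app first) is the per-user NoReReg value, which suppresses the self-repair check at startup. A sketch, assuming Office 2010 (the 14.0 registry key) and that Word, Excel and Publisher are the affected apps; run it under the non-admin account:

        # Hedged workaround sketch: create the NoReReg value that suppresses Office's
        # startup self-repair check. The app list here is an assumption; extend it.
        import winreg

        APPS = ["Word", "Excel", "Publisher"]

        for app in APPS:
            path = r"Software\Microsoft\Office\14.0\{}\Options".format(app)
            with winreg.CreateKey(winreg.HKEY_CURRENT_USER, path) as key:
                winreg.SetValueEx(key, "NoReReg", 0, winreg.REG_DWORD, 1)
                print("set NoReReg=1 for", app)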


  • Database Migration Scripts: Getting from place A to place B

    - by Phil Factor
    We’ll be looking at a typical database ‘migration’ script which uses an unusual technique to migrate existing ‘de-normalised’ data into a more correct form. So, the book-distribution business that uses the PUBS database has gradually grown organically, and has slipped into ‘de-normalisation’ habits. What’s this? A new column with a list of tags or ‘types’ assigned to books. Because books aren’t really in just one category, someone has ‘cured’ the mismatch between the database and the business requirements. This is fine, but it is now proving difficult for their new website that allows searches by tags. Any request for a history book really has to look in the entire list of associated tags rather than the ‘Type’ field that only keeps the primary tag. We have other problems. The TypeList column has duplicates in it which will be affecting the reporting, and there is the danger of mis-spellings getting in there. The reporting system can’t be persuaded to do reports based on the tags, and the database developers are complaining about the un-Coddly things going on in their database. In your version of PUBS this extra column doesn’t exist, so we’ve added it and put in 10,000 titles using SQL Data Generator.

        /* So how do we refactor this database? Firstly, we create a table of all the tags. */
        IF OBJECT_ID('TagName') IS NULL OR OBJECT_ID('TagTitle') IS NULL
        BEGIN
          CREATE TABLE TagName
            (TagName_ID INT IDENTITY(1,1) PRIMARY KEY,
             Tag VARCHAR(20) NOT NULL UNIQUE)

          /* ...and we insert into it all the tags from the list (remembering to take out any leading spaces) */
          INSERT INTO TagName (Tag)
            SELECT DISTINCT LTRIM(x.y.value('.', 'Varchar(80)')) AS [Tag]
            FROM (SELECT Title_ID,
                         CONVERT(XML, '<list><i>' + REPLACE(TypeList, ',', '</i><i>') + '</i></list>') AS XMLkeywords
                  FROM dbo.titles) g
            CROSS APPLY XMLkeywords.nodes('/list/i/text()') AS x (y)

          /* we can then use this table to provide a table that relates tags to titles */
          CREATE TABLE TagTitle
            (TagTitle_ID INT IDENTITY(1, 1),
             [title_id] [dbo].[tid] NOT NULL REFERENCES titles (Title_ID),
             TagName_ID INT NOT NULL REFERENCES TagName (TagName_ID)
             CONSTRAINT [PK_TagTitle]
               PRIMARY KEY CLUSTERED ([title_id] ASC, TagName_ID)
               ON [PRIMARY])

          CREATE NONCLUSTERED INDEX idxTagName_ID ON TagTitle (TagName_ID)
            INCLUDE (TagTitle_ID, title_id)

          /* ...and it is easy to fill this with the tags for each title... */
          INSERT INTO TagTitle (Title_ID, TagName_ID)
            SELECT DISTINCT Title_ID, TagName_ID
            FROM (SELECT Title_ID,
                         CONVERT(XML, '<list><i>' + REPLACE(TypeList, ',', '</i><i>') + '</i></list>') AS XMLkeywords
                  FROM dbo.titles) g
            CROSS APPLY XMLkeywords.nodes('/list/i/text()') AS x (y)
            INNER JOIN TagName ON TagName.Tag = LTRIM(x.y.value('.', 'Varchar(80)'))
        END

    That’s all there was to it. Now we can select all titles that have the military tag, just to try things out; see the top ten most popular tags for titles; and, if you still want your list of tags for each title, produce that too:

        /* all titles that have the military tag */
        SELECT Title FROM titles
          INNER JOIN TagTitle ON titles.title_ID = TagTitle.Title_ID
          INNER JOIN TagName ON TagName.TagName_ID = TagTitle.TagName_ID
          WHERE TagName.Tag = 'Military'

        /* the top ten most popular tags for titles */
        SELECT Tag, COUNT(*) FROM titles
          INNER JOIN TagTitle ON titles.title_ID = TagTitle.Title_ID
          INNER JOIN TagName ON TagName.TagName_ID = TagTitle.TagName_ID
          GROUP BY Tag ORDER BY COUNT(*) DESC

        /* the list of tags for each title */
        SELECT title_ID, title, STUFF(
          (SELECT ',' + TagName.Tag FROM titles thisTitle
             INNER JOIN TagTitle ON titles.title_ID = TagTitle.Title_ID
             INNER JOIN TagName ON TagName.TagName_ID = TagTitle.TagName_ID
           WHERE thisTitle.title_id = titles.title_ID
           FOR XML PATH(''), TYPE).value('.', 'varchar(max)'), 1, 1, '')
          FROM titles
          ORDER BY title_ID

    So we’ve refactored our PUBS database without pain. We’ve even put in a check to prevent it being re-run once the new tables are created. Here is the diagram of the new tag relationship (diagram omitted in this excerpt). We’ve done both the DDL to create the tables and their associated components, and the DML to put the data in them. I could also have included the script to remove the de-normalised TypeList column, but I’d do a whole lot of tests first before doing that. Yes, I’ve left out the assertion tests too, which should check the edge cases and make sure the result is what you’d expect. One thing I can’t quite figure out is how to deal with an ordered list using this simple XML-based technique. We can ensure that, if we have to produce a list of tags, we can get the primary ‘type’ to be first in the list, but what if the entire order is significant? Thank goodness it isn’t in this case. If it were, we might have to revisit a string-splitter function that returns the ordinal position of each component in the sequence.

    You’ll see immediately that we can create a synchronisation script for deployment from a comparison tool such as SQL Compare, to change the schema (DDL). On the other hand, no tool could do the DML to stuff the data into the new table, since there is no way that any tool will be able to work out where the data should go. We used some pretty hairy code to deal with a slightly untypical problem, so we would have to do this migration by hand, and it has to go into source control as a batch. If most of your database changes are to be deployed by an automated process, then there must be a way of overriding that part of the data-synchronisation process: taking the part of the script that fills the tables, checking that the tables have not already been filled, and executing it as part of the transaction. Of course, you might prefer the approach I’ve taken with this script of creating the tables in the same batch as the data-conversion process, and then using the presence of the tables to prevent the script from being re-run.

    The problem with scripting a refactoring change to a database is that it has to work both ways. If we install the new system and then have to roll back the changes, several books may have been added, or had their tags changed, in the meantime. Yes, you have to script any rollback! These rollback scripts have to be mercilessly tested, and put in source control, just in case a deployment has to be rolled back after it has been in place for any length of time. I’ve shown you how to do this with this part of the script...

        /* the list of tags for each title */
        SELECT title_ID, title, STUFF(
          (SELECT ',' + TagName.Tag FROM titles thisTitle
             INNER JOIN TagTitle ON titles.title_ID = TagTitle.Title_ID
             INNER JOIN TagName ON TagName.TagName_ID = TagTitle.TagName_ID
           WHERE thisTitle.title_id = titles.title_ID
           FOR XML PATH(''), TYPE).value('.', 'varchar(max)'), 1, 1, '')
          FROM titles
          ORDER BY title_ID

    ...which would be turned into an UPDATE ... FROM script:

        UPDATE titles SET typelist = ThisTagList
        FROM (SELECT title_ID, title, STUFF(
                (SELECT ',' + TagName.Tag FROM titles thisTitle
                   INNER JOIN TagTitle ON titles.title_ID = TagTitle.Title_ID
                   INNER JOIN TagName ON TagName.TagName_ID = TagTitle.TagName_ID
                 WHERE thisTitle.title_id = titles.title_ID
                 ORDER BY CASE WHEN TagName.Tag = titles.[type] THEN 1 ELSE 0 END DESC
                 FOR XML PATH(''), TYPE).value('.', 'varchar(max)'), 1, 1, '') AS ThisTagList
              FROM titles) f
        INNER JOIN titles ON f.title_ID = titles.title_ID

    You’ll notice that it isn’t quite a round trip, because the tags come back in a different order, though we’ve managed to make sure that the primary tag is the first one, as it was originally. So, we’ve improved the database for the poor book distributors using PUBS. It is not a major deal, but you’ve got to be prepared to provide a migration script that will go both forwards and backwards. Ideally, database refactoring scripts should be able to go from any version to any other. Schema synchronisation scripts can do this pretty easily, but no data synchronisation script can deal with serious refactoring jobs without the developers being able to specify how to deal with cases like this.


  • How do you support your code post employment end?

    - by James
    What is the process for leaving a company (or even a group/division) in terms of code support? Is it best to handle all questions? Do you give the remaining developers access to yourself as a future resource? If so, is there a way to not give full access? I've experienced first hand where answers about the general software architecture from the initial developer would have been invaluable. I understand that if serious assistance is needed, then it becomes a typical case of employment negotiation as a support contract. However, should serious assistance be required, what steps can you take to ease that process of contacting you? I was thinking of doing something like making a (YOUR_NAME)_codesupport @ (YOUR_FAVORITE_EMAIL_CLIENT).com address. My situation specifics: I'm a co-op student, and as such bounce around companies on 4-month stints. This means introducing myself to a lot of new code bases, as well as leaving a fair share of orphaned code behind when I leave a company. I feel bad if I leave junk code around.


  • How to check if a server runs in pressing mode

    - by Ice
    Hi there, Layout: I have at a customer site a server (Win2003 R2 SP2 Standard Edition 32-bit) with SQL Server 2005 and some databases. This system starts with the /3GB switch. The system reports 3.25 GB RAM, and Task Manager reports the sqlserver.exe process, at 2,758,255 K, as the process with the highest consumption. The OS normally splits RAM between applications and itself 50:50, but here the /3GB switch is activated, so I think the share for applications is more than 50% of RAM. Knowledge (or rather, lack of knowledge): Somebody told me that if the OS runs out of memory within its part of RAM, the server goes into "pressing mode". Questions: What is this pressing mode? Is pressing mode possible at all in this scenario? What should be done to get more performance out of this SQL Server, besides optimizing the database and all this stuff?


  • Script to set "Hide file extensions"

    - by Ickster
    I'm tired of the multi-step process to set my preferred folder options on every server to which I log on (mostly Win2008, but also some 2012 and Win7 here and there). I'd love to be able to script the process, but unfortunately I can't find any commands or extensions to do so for folder options. There are several settings I'd like to change, but in particular I'd like to set "Hide file extensions for known file types" to false. I figure that if I can do that, I'll be able to manage any additional settings on my own. Methods that work on the vanilla command line would be preferred, but if there are commands in PowerShell, I'll use that.
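
    This setting lives in the registry, so it scripts cleanly: the HideFileExt value under HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced (0 shows extensions, 1 hides them). From a vanilla command line the equivalent one-liner should be reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced" /v HideFileExt /t REG_DWORD /d 0 /f. A sketch of the same change in Python, including the Explorer restart the change needs in order to take effect:

        # Flip the Explorer "hide extensions" flag for the current user (0 = show).
        import subprocess
        import winreg

        ADV = r"Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced"

        with winreg.OpenKey(winreg.HKEY_CURRENT_USER, ADV, 0, winreg.KEY_SET_VALUE) as key:
            winreg.SetValueEx(key, "HideFileExt", 0, winreg.REG_DWORD, 0)

        # Explorer caches these settings; restarting it applies the change.
        subprocess.call(["taskkill", "/f", "/im", "explorer.exe"])
        subprocess.Popen(["explorer.exe"])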


  • How can I programmatically change the keyboard layout?

    - by Jason R. Coombs
    I want to run a shell command or script that will configure each of my Ubuntu Precise boxes to use the Dvorak keyboard layout as the default (and only) layout. With earlier versions, I was able to set XKBVARIANT in /etc/default/keyboard, but when I make this change in Precise (and reboot), the keyboard layout appears to be unaffected (both in the console and in GNOME). I also tried setting XKBMODEL to pc105 and XKBLAYOUT to us, but that did not seem to help. I know I can set the layout for GNOME using the 'keyboard layout' tool... but I want the change to affect the console, and I want to automate the process. How can I accomplish this? Edit: To clarify, I want to know how I can change (using only a script or the command line) the keyboard layout to Dvorak as the default and only keyboard layout for both GNOME and the console. I want this change to be persistent (survive reboots), just as it is when the change is made through the Keyboard Layout tool. Edit: Let me put it another way. If I had installed the operating system myself (which I did not, because the OS was installed by the virtual machine infrastructure), I could have selected the desired keyboard layout at install time, and that layout would have been applied persistently, system-wide. How can I change the layout to appear as if I had set it during the install process?
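
    A sketch of one way to script this on Precise-era Ubuntu (hedged: the variable names assume the stock /etc/default/keyboard, and a GNOME session may still override the layout until its own keyboard-layout list is cleared). It rewrites the config, then reapplies it non-interactively; run with sudo:

        # Rewrite /etc/default/keyboard for Dvorak, then reapply for the console.
        import re
        import subprocess

        SETTINGS = {"XKBMODEL": "pc105", "XKBLAYOUT": "us",
                    "XKBVARIANT": "dvorak", "XKBOPTIONS": ""}

        path = "/etc/default/keyboard"
        with open(path) as f:
            text = f.read()
        for var, val in SETTINGS.items():
            text = re.sub(r'^%s=.*$' % var, '%s="%s"' % (var, val), text, flags=re.M)
        with open(path, "w") as f:
            f.write(text)

        subprocess.call(["dpkg-reconfigure", "-f", "noninteractive",
                         "keyboard-configuration"])
        subprocess.call(["setupcon"])  # applies the layout to the virtual consoles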


  • Xorg becomes unkillable at 3AM

    - by chew socks
    Most nights, some time during the 3 AM hour, my Xorg process will spike to 100% CPU and GPU load will also increase to 100%. The process also becomes unkillable: I cannot sudo kill -9 it or get back control with sudo service lightdm restart. I also cannot switch to a tty screen with Ctrl+Alt+F1. To reboot I have to log in with ssh, but this is not ideal because if I reboot while it is doing this, my ZFS pool will fail to mount when it comes back up (that is where my /home is). Does anyone have any ideas as to why I can't stop and restart Xorg, or even better, know why this is happening? Thanks. NOTE: For anyone who comes looking for the same problem: I disabled Catalyst AI and made it through the night. I've been up for 1 day 3 hours now. My record for this month is 2 days and 19 hours without a problem. My all-time record is 6 days without a crash. I'll post here if it crashes again or I'm able to set a new record.


  • Oracle E-Business Suite is Helping to Save Lives at the National Marrow Donor Program

    - by Di Seghposs
    To improve the management of its life-saving operations, the National Marrow Donor Program recently modernized its financial and procurement operations by upgrading to Oracle E-Business Suite 12.1.   As the global leader in bone marrow and umbilical cord blood transplants, the NMDP manages a complex ecosystem of donor, patient, hospital, and biological data. “Maintaining accurate data and having an efficient matching process is essential, particularly as our global database of bone marrow patients grows and donor lists expand,” says Bruce Schmaltz, director of finance/controller. “We rely on the Oracle E-Business Suite to ensure our procurement and financial management processes meet the highest standards, enabling our growing non-profit to work swiftly and efficiently to help improve and save lives.” As the non-profit organization and its registry grew larger, NMDP needed a modern platform to store and integrate its financial information and complicated procurement process. It selected Oracle E-Business Suite for its ability to fit seamlessly into NMDP’s enterprise architecture. NMDP initially implemented Oracle E-Business Suite release 12 by leveraging Oracle Business Accelerators, which are rapid implementation tools and templates that help reduce implementation time and costs. With Oracle Financial Management and Oracle Procurement, NMDP has streamlined back-office processes and integrated its procure-to-pay business processes by leveraging industry leading accounts payable, accounts receivable, and general ledger modules. NMDP is currently rolling out Oracle Hyperion Performance Management applications and plans to implement Oracle Order Management and Oracle Advanced Pricing by the end of 2012. Read more details about NMDP’s modernization efforts.  For more updates on Oracle Financial Management Solutions, view our November 2012 Oracle Information InDepth Financial Management newsletter. Subscribe Now. 


  • Distributing application updates using SCCM 2007

    - by theraneman
    Hi all, if there are any System Center Config Manager (SCCM) users out there, please clarify my doubt here. I have used the ConfigMgr console to distribute a custom application to a client machine. Now I need to distribute some updated files for that application. Isn't it possible to add those files to the same package source used earlier and advertise again? Or should I use the SCCM software update section for this? Not sure if it's only me, but the software distribution process looks much easier than the software updates process in SCCM 2007. Please do let me know if there are any online tutorials which explain how to update a custom application. Any help much appreciated.


  • SQL Server Migration Assistant for Oracle problem

    - by Paul
    I've recently installed SSMA on my computer, and after connecting to both the Oracle instance (which holds the database to be converted) and the SQL Server, I've mapped the needed schemas from Oracle to MSSQL. The problem is that when I click on the Report button for the assessment report, there's an error popping up: "Assessment Error: Nothing to Process". The output window states: Starting conversion... Analyzing metadata... Conversion finished with 0 errors, 0 warnings, and 0 informational messages. There is nothing to process. Has anyone got experience with SSMA? I can't figure out what I am doing wrong. Thank you.


  • Is there a way to make scp run faster on Mac OS X?

    - by paul_sns
    I'm trying to upload a Flex-generated SWF file from my MacBook (running Snow Leopard) using the command scp main.swf server.com:/ I had set up key authentication to avoid typing the user/pass every time. This process normally takes up to two minutes using my connection at home (768 kbps down / 300+ kbps up). The interesting part is that when I use WinSCP on my Windows XP machine, the process only takes 30 seconds max. Both my MacBook and Windows XP machine use the same internet connection. The MacBook is connected to the router via cable (which should be faster, right?) while the Windows XP machine connects through Wifi. Let me know if you need additional information in order to diagnose the problem. Thanks!


  • Testing of visualization projects

    - by paxRoman
    We develop small to large visualization projects for different tasks and industries, and sometimes, after rewriting them a couple of times in the process, we hit walls because we discover that we need to add a lot of code to support new requirements. We have now established a design process that seems to work well (at least we reduced the development time for each new project quite a bit), but we're still left scratching our heads over this question: what exactly should we test when testing visualizations? Whether everything that we want to explore is on the screen (bounded visualizations)? Whether the data is valid (that's one of the nice things about visualizations - you can spot errors in your datasets)? Usability? User interaction? Code quality? I can tell you for sure that a simple check of the code quality is certainly not enough! Is there a classic paper / book about how to test visualizations? Also, do you happen to know about classic design patterns for visualizations (except the obvious ones like Pub-Sub)?


  • CHROOT for shell script testing

    - by Josh
    I am looking at setting up a shell script in order to properly document and automate the process I am using to set up a few servers we have. In order to test the shell script through its different stages, I was thinking a chroot would be ideal, since I can wipe out the "virtual root" and recreate it on the fly. I have never used chroot before, however. I was just curious: what are the exact steps I would need to follow to implement this process of creating a chroot (with the basic core functions that would be needed to install apache/php/etc.) and then destroying it?
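
    A sketch of the create/test/destroy cycle, assuming a Debian/Ubuntu host with debootstrap installed and root privileges; the suite name and paths are placeholders. Note that installing apache/php inside the chroot may additionally need /proc and /dev bind-mounted:

        # Create a throwaway root, run a setup script inside it, then wipe it.
        import os
        import shutil
        import subprocess

        ROOT = "/srv/testroot"   # placeholder location for the chroot

        def create():
            # populate a minimal Ubuntu filesystem ('focal' is a placeholder suite)
            subprocess.check_call(["debootstrap", "focal", ROOT])

        def run_in_chroot(script_on_host):
            shutil.copy(script_on_host, os.path.join(ROOT, "tmp/setup.sh"))
            # chroot(8) runs the script with ROOT as its filesystem root
            subprocess.check_call(["chroot", ROOT, "/bin/bash", "/tmp/setup.sh"])

        def destroy():
            shutil.rmtree(ROOT)

        if __name__ == "__main__":
            create()
            run_in_chroot("./setup.sh")   # the script under test
            destroy()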


  • Error while removing the new kernel 2.6.37

    - by Tarek
    Hi! I tried to install the new kernel but something went wrong, and I'm trying to remove it now. The error message is: mhd@Tarek-Laptop:~$ sudo apt-get install -f Reading package lists... Done Building dependency tree Reading state information... Done The following packages will be REMOVED: linux-image-2.6.37-020637-generic 0 upgraded, 0 newly installed, 1 to remove and 9 not upgraded. 1 not fully installed or removed. After this operation, 111MB disk space will be freed. Do you want to continue [Y/n]? y (Reading database ... 188780 files and directories currently installed.) Removing linux-image-2.6.37-020637-generic ... Examining /etc/kernel/postrm.d . run-parts: executing /etc/kernel/postrm.d/initramfs-tools 2.6.37-020637-generic /boot/vmlinuz-2.6.37-020637-generic run-parts: executing /etc/kernel/postrm.d/zz-update-grub 2.6.37-020637-generic /boot/vmlinuz-2.6.37-020637-generic /etc/default/grub: 33: Syntax error: EOF in backquote substitution run-parts: /etc/kernel/postrm.d/zz-update-grub exited with return code 2 Failed to process /etc/kernel/postrm.d at /var/lib/dpkg/info/linux-image-2.6.37-020637-generic.postrm line 328. dpkg: error processing linux-image-2.6.37-020637-generic (--remove): subprocess installed post-removal script returned error exit status 1 Errors were encountered while processing: linux-image-2.6.37-020637-generic E: Sub-process /usr/bin/dpkg returned an error code (1) The previous unsolved error is in this bug.


  • How do I find the source of soft page faults?

    - by David Robison
    I have a Windows 7 x64 computer that, according to Performance Monitor, has 70,000 page faults / second when idling. That seems like a lot to me (every other computer I check has basically 0 page faults / second when idling). If I use Resource Monitor or Process Explorer to check hard faults, I see that they are basically 0, so all the page faults are soft. Normally, soft page faults are not a problem, but I suspect they might be causing issues for this computer given there are so many. I would like to identify which programs are causing the soft faults. Are there any tools that display the number of soft page faults for each process?
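
    Performance Monitor can chart this per process (the Process object has a Page Faults/sec counter, one instance per process), but for a quick ranked list a short script works too. A sketch using psutil, whose memory_info() on Windows exposes a cumulative num_page_faults counter (hard plus soft combined; with hard faults near zero, as here, the delta is effectively soft faults):

        # Sample each process's cumulative page-fault counter twice, print deltas.
        import time
        import psutil

        def snapshot():
            counts = {}
            for p in psutil.process_iter(["name"]):
                try:
                    counts[p.pid] = (p.info["name"], p.memory_info().num_page_faults)
                except (psutil.NoSuchProcess, psutil.AccessDenied):
                    pass
            return counts

        before = snapshot()
        time.sleep(5)
        after = snapshot()

        deltas = [(after[pid][1] - cnt, name)
                  for pid, (name, cnt) in before.items() if pid in after]
        for faults, name in sorted(deltas, reverse=True)[:10]:
            print("%10d faults/5s  %s" % (faults, name))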


  • Oracle is Child’s Play… in NSW

    - by divya.malik
    A few weeks ago, my colleague Michael Seback posted a blog entry on Oracle’s acquisition of Haley. We recently read an interesting report from Down Under, and here is our press release on the implementation of Oracle’s policy automation software in New South Wales, which I thought I would share. We always love hearing about our software “at work”, and especially in the public sector social services area, where it makes a big difference to people’s lives. Here are some of the reasons why NSW chose Oracle software: “One of the things Oracle’s Policy Automation system is good at is allowing you take decision trees and rules that are obviously written in English and code them up using very much a natural language approach,” said Holling (CIO for Human Services). “So it was quite a short process to translate the final set of rules that were written on paper into business rules that were actually embedded in the system.” “Another reason why we chose Oracle’s Automation tool is because with future versions of Siebel it comes very tightly integrated with that. It allows us to then to basically take the results of the Policy Automation survey and actually populate our client management system database with that information,” said Holling. As per Surend Dayal, North America VP, Oracle’s policy automation has applications across a wide range of industries, including public sector—especially health and human services—also financial services, insurance, and even airline rewards programs. In other words, any business process that requires consistent, accurate decision-making where complex legislation and/or internal policies are involved. Click here to read more about Oracle and Haley.


  • Ubuntu GRUB boot hangs on external USB drive

    - by schoetbi
    Hi, I just gave Xubuntu another try and installed it normally on an external USB hard disk. I have another hard disk installed inside the laptop that has Windows XP on it. Now the problem: when I boot from the external drive, the boot menu of GRUB 2 shows up and I see all installed bootable partitions, including Windows. Now I select Xubuntu and wait. When the Xubuntu logo shows up, the boot process hangs. Now the funny thing: when the logo shows up, I can unplug the USB drive and reconnect it real fast. Then the boot process will continue!!! Since I am a Linux newbie, I would appreciate every hint that can solve this, so that I can enjoy a smooth Linux boot :-) EDIT: Grub version is: tobias@ubuntu:~$ grub-install -v grub-install (GRUB) 1.98+20100804-5ubuntu3 Kernel is: tobias@ubuntu:~$ uname -r 2.6.35-23-generic Xubuntu is 9.10 Thanks


  • How to solve: "Connect to host some_hostname port 22: Connection timed out"

    - by Aufwind
    I have two Ubuntu machines. Both have openssh-client and openssh-server installed on them. ssh-ing from machine G (fresh Ubuntu 11.10 installation) to machine K works great. But ssh-ing from machine K to machine G always results in the error: connect to host some_hostname port 22: Connection timed out. I went through the troubleshooting section of help.ubuntu.com and I got the following results: ps -A | grep sshd # results in 848 ? 00:00:00 sshd - sudo ss -lnp | grep sshd # results in 0 128 :::22 :::* users:(("sshd",848,4)) 0 128 *:22 *:* users:(("sshd",848,3)) - ssh -v localhost # works! - sudo ufw status verbose # yields: "Status: inactive" I haven't changed anything in the config file. What can I do to locate the problem and solve it? I'd be glad for any hint! Edit: ping was successful in both directions! I did a telnet <machineK> 22 from machine G, which resulted in Trying and then telnet: Unable to connect to remote host: Connection timed out. But telnet the other way around worked just fine! Edit 2: sudo status ssh # yields: ssh start/running, process 966 /etc/hostname # contains my hostname, let's call it blubb /etc/hosts # contains the following 127.0.0.1 localhost # 127.0.1.1 blubb 129.26.68.74 blubb # I added this! - sudo service ufw status # yields: ufw start/running I installed Gufw and set it to ON. Then I selected from Incoming the option ALLOW. Then I sshed to another machine, from where I sshed back to my machine. Still the same error as above: connect to host blubb port 22: Connection timed out. Any more hints what I can check?
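
    A timeout, as opposed to "connection refused", usually means the SYN packets never reach sshd at all, which points at name resolution, routing, or a firewall somewhere on the path rather than the ssh configuration itself; that fits the manual /etc/hosts entry above. A small probe to separate those failure modes (plain Python 3; the default hostname is the blubb from the question):

        # Probe TCP port 22 and report which failure mode occurs.
        import socket
        import sys

        host = sys.argv[1] if len(sys.argv) > 1 else "blubb"

        try:
            socket.create_connection((host, 22), timeout=5).close()
            print("TCP 22 reachable: something is answering on that port")
        except socket.timeout:
            print("timeout: packets silently dropped - routing/firewall, not sshd")
        except ConnectionRefusedError:
            print("refused: host reached, but nothing accepting on port 22")
        except socket.gaierror:
            print("name resolution failed: check /etc/hosts or DNS for", host)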


  • Seeking a solution to automatically copy files from the CD-ROM to the USB drive once it's connected.

    - by Ray Nathan
    I plan to distribute a free CD that automatically copies files to a connected USB device. This process will be done on the computers of the users that obtain the CD. The CD will contain an autorun.inf file that will instruct the computer to copy a set of files located on the CD to a specific directory on the connected USB device. The USB drive letter is not the same on all systems, therefore... Windows XP should automatically determine the drive letter of the USB device before the copy operation begins. What would be the best way of creating a short batch file or script that I can place on the CD to execute this process? Also, please note that it is NOT feasible or recommended to include a batch file on the USB devices to sync this operation, due to the explanation at the beginning of this paragraph. :) Thank You All
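
    For the drive-letter discovery, Windows can be asked which letters are removable drives via the Win32 GetDriveType call (DRIVE_REMOVABLE == 2). A sketch of the copy step in Python, runnable from the CD on XP via autorun.inf; the payload and target folder names are placeholder assumptions:

        # Find the first removable drive and copy a payload folder onto it.
        import ctypes
        import os
        import shutil
        import string

        DRIVE_REMOVABLE = 2
        PAYLOAD = "files"        # folder on the CD, relative to this script
        TARGET = "distributed"   # folder to create on the USB stick

        def removable_drives():
            for letter in string.ascii_uppercase:
                root = letter + ":\\"
                if ctypes.windll.kernel32.GetDriveTypeW(root) == DRIVE_REMOVABLE:
                    yield root

        for drive in removable_drives():
            dest = os.path.join(drive, TARGET)
            if not os.path.exists(dest):
                shutil.copytree(PAYLOAD, dest)
                print("copied to", dest)
            else:
                print("already present at", dest)
            break
        else:
            print("no removable drive found - plug one in and rerun")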


  • Banshee doesn't like opening websites

    - by Allan
    I have come across two bugs (which will be added to Launchpad if they're not resolved here). When I open any of the websites in Banshee (Amazon or Miro Guide), as soon as the site is finished loading it crashes Banshee. If I play any video, local or remote, it will show 1 frame, maybe 0.5 sec of video, then I get a black screen and audio continues in the background. Specs & Details: I have a Fujitsu Amilo 1718 laptop with 2 gig of RAM (originally 1 gig); graphics is provided by an ATI Radeon Xpress 200M (don't laugh, it works with Compiz... just). I have a link to the output of banshee --debug Here. Don't have time to read? Here are the highlights: [2 Warn 11:52:34.814] Caught an exception - System.ArgumentNullException: Argument cannot be null. then a bit later Debug info from gdb: Could not attach to process. If your uid matches the uid of the target process, check the setting of /proc/sys/kernel/yama/ptrace_scope, or try again as the root user. For more details, see /etc/sysctl.d/10-ptrace.conf ptrace: Operation not permitted. ================================================================= Got a SIGSEGV while executing native code. This usually indicates a fatal error in the mono runtime or one of the native libraries used by your application. ================================================================= Aborted Not music to my ears, as you can expect. The version I am using is 1.9.4 from the daily PPA, but these bugs happen in any version of Banshee from 1.8.1 and up. So if anyone has come across a fix for this problem, please share!! Additional info: both VLC and Miro work on my system, so there isn't a system-wide problem with video, and I haven't mentioned Mono, so no trolling - it will get voted down.


  • Likelihood of obtaining the same IP address after restarting a router

    - by Raffael
    My actual objective is to simulate the logged IPs of web-site users, who are all assumed to use dynamically assigned IPs. There will be two kinds of users: good users, who only change IP when the ISP assigns a new one, and bad users, who will restart their router to obtain a new IP. So what I would like to understand is what assignment mechanics are usually at work here: from what pool of IPs is an address chosen, and is the probability uniformly distributed? I know there is no definite and global answer, as this process can be adjusted by the ISP, but maybe there is something like a technological frame and common process that allows some plausible assumptions. UPDATE: A bad user will restart the router as often as possible if necessary. So here the central question is how many IP changes on average are necessary to end up with a previously used IP.
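
    Under the uniform assumption, the waiting time for a repeat is the birthday problem: holding k previously seen addresses out of a pool of N, each restart repeats with probability k/N, and the expected number of restarts until the first repeat grows on the order of sqrt(pi*N/2). A toy simulation sketch (the pool size is a placeholder; real ISPs often hand the same lease straight back, which would shorten this considerably):

        # Toy model: every restart draws a fresh address uniformly from the pool.
        import random

        POOL = 4096       # hypothetical ISP pool size
        TRIALS = 10000

        def restarts_until_repeat():
            seen = {random.randrange(POOL)}   # address held before the first restart
            restarts = 0
            while True:
                restarts += 1
                addr = random.randrange(POOL)  # uniform re-assignment assumption
                if addr in seen:
                    return restarts
                seen.add(addr)

        avg = sum(restarts_until_repeat() for _ in range(TRIALS)) / TRIALS
        print("average restarts until a previously used IP: %.1f (pool=%d)" % (avg, POOL))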


  • mongod fork vs nohup

    - by Daniel Kitachewsky
    I'm currently writing process-management software. One package we use is mongo. Is there any difference between launching mongo with mongod --fork --logpath=/my/path/mongo.log and nohup mongod >> /my/path/mongo.log 2>&1 < /dev/null & ? My first thought was that --fork could spawn more processes and/or threads, and it was suggested to me that --fork could be useful for changing the effective user (downgrading privileges). But we run everything under the same user (the process manager and mongod), so is there any other difference? Thank you
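
    As I understand it, --fork is a real daemonization rather than just backgrounding: mongod double-forks, starts a new session away from the controlling terminal, redirects its descriptors to the log, and (I believe) the parent only exits successfully once the child is up, which gives a process manager a meaningful exit code. nohup merely shields a child that is still in your session from SIGHUP. A sketch of the classic double-fork sequence, to show what nohup leaves undone:

        # Classic Unix daemonization: what a --fork style flag does internally.
        import os
        import sys

        def daemonize(logpath):
            if os.fork() > 0:
                sys.exit(0)      # first parent exits: child is re-parented to init
            os.setsid()          # new session: no controlling terminal
            if os.fork() > 0:
                sys.exit(0)      # second fork: can never reacquire a terminal
            os.chdir("/")
            log = open(logpath, "a")
            devnull = open(os.devnull, "r")
            os.dup2(devnull.fileno(), 0)   # stdin from /dev/null
            os.dup2(log.fileno(), 1)       # stdout to the log
            os.dup2(log.fileno(), 2)       # stderr to the log

        if __name__ == "__main__":
            daemonize("/tmp/daemon.log")
            print("running detached")      # ends up in the log file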


  • Virtual Pageview Goal Funnel Not Tracking Correctly

    - by cphill
    I have an AJAX form that has three stages: 1. The landing page, where a user fills out a form, selects between three question sets, and clicks Begin Assessment. 2. The assessment page, where users fill out questions relating to the question set that they selected on the landing page. 3. The results page, which shows whether they are at High Risk or Low Risk. Since this is an AJAX form that does not open a new page for each step of the process, I implemented a virtual pageview that fires on the page load of each step. The following is my virtual pageview setup for each stage: 1. /form/begin-assessment 2. /form/assessment/* (* = three different virtual pageviews, depending on the user's selection among the three sets of questions: /one, /two, /three) 3. /form/finished-assessment. I have set up three separate goals to track user progress through each step of the form assessment. Here is my goal setup: Goal Description: - Goal Type: Destination. Goal Details: - Destination: /form/finished-assessment - Funnel: On. Step 1: /form/begin-assessment (Required: Yes). Step 2: /form/assessment/one (replace /one with /two or /three and you have my two other goals' setup). Now my goals are recording the correct data in the first step and show the completions in the destination, but the second step does not show any drop-offs; it shows the same data as the destination. Any ideas on what I set up wrong?

