Search Results

Search found 26798 results on 1072 pages for 'difference between detach attach and restore backup a db'.

Page 439/1072 | < Previous Page | Next Page >

  • C++0x error : variable 'std::packaged_task<int> pt1' has initializer but incomplete type

    - by Eternal Learner
    Hi all, below is a simple C++0x program that makes use of packaged_task and futures. While compiling the program I get:

        error: variable 'std::packaged_task<int> pt1' has initializer but incomplete type

    The program is below:

        #include <future>
        #include <iostream>

        using namespace std;

        int printFn() {
            for (int i = 0; i < 100; i++) {
                cout << "thread " << i << endl;
            }
            return 1;
        }

        int main() {
            packaged_task<int> pt1(&printFn);
            future<int> fut = pt1.get_future();
            thread t(move(pt1));
            t.detach();
            int value = fut.get();
            return 0;
        }
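
    A hedged aside, not from the original poster: std::thread and std::move also want their own headers, and on GCC std::packaged_task is only fully defined when the library's thread support is enabled, so a build along these lines is the usual starting point:

        // additionally: #include <thread>   (std::thread)
        //               #include <utility>  (std::move)
        // then build with pthreads enabled:
        //   g++ -std=c++0x -pthread main.cpp -o main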

    Read the article

  • jQuery.remove() - Is there a way to get the object back after you remove it?

    - by Jack Marchetti
    I basically have the same problem as in this question: Flash Video still playing in hidden div. I've used the .remove() jQuery call and this works. However, I have previous/next buttons for when a user scrolls through hidden/non-hidden divs. What I need to know is: once I remove the Flash object, is there a way to get it back other than refreshing the page? Basically, can this be handled client side, or am I going to need to implement some server-side handling? detach() won't work because the Flash video continues to play. I can't just hide it, because the video continues to play as well.
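
    A minimal client-side sketch, assuming the player sits in its own container div (selectors below are made up): both .remove() and .detach() return the removed elements, so keeping the return value lets you put the same node back later without a page refresh (.remove() strips jQuery data and handlers, .detach() keeps them).

        // stash the jQuery set that .remove() hands back
        var $player = $('#flash-container').remove();

        // ... later, when the user navigates back to this slide:
        $('#video-wrapper').append($player);

    Whether the movie resumes or restarts after re-insertion is up to the browser and the Flash plugin, so this still needs testing against the behavior described above.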

    Read the article

  • Core i7-620M vs Core i5-540M

    - by Shalan
    Hi, I'm recommending a laptop to a colleague, and the specific laptop he has chosen offers the above CPU chips as options. Both chips have 2 cores/4 threads. The i7-620M is 2.66 GHz (4MB cache) while the i5-540M is 2.53 GHz (3MB cache); both are the Arrandale architecture. He is a .NET programmer working with SQL Server and Oracle, and occasionally uses Adobe Fireworks for web-related design elements. He also loves playing around in Adobe Premiere Pro, and does a lot of media/video work. Would you notice any significant performance difference between the two? The laptop manufacturer claims that the battery life on both is the same irrespective of the chip used (although I find that hard to believe), but there is a major cost difference between them, with the Core i7-620M being the more expensive. According to http://ark.intel.com, the one thing that seems different (besides the obvious speeds/frequencies/etc.) is a feature called "Embedded" - what is this exactly? You can see the quick comparison here: http://ark.intel.com/Compare.aspx?ids=43544,43560 I would sincerely appreciate any advice on this. Thanks!

    Read the article

  • Using unixODBC to connect to Oracle server

    - by Paul
    I am trying to configure our web server (RHEL 5.4 x86) to connect to an Oracle database using unixODBC. I have installed unixODBC-2.2.11-7.1.1, which yum tells me is the latest version. I have also installed the Oracle InstantClient 11.2 and the Oracle InstantClient ODBC library. I have symlinked all the .so files in /usr/lib/oracle/11.2/client/lib to /usr/lib. I have set $LD_LIBRARY_PATH to /usr/lib/, $ORACLE_HOME to /usr/lib/oracle, and $TNS_ADMIN to the directory containing my (valid) tnsnames.ora file. Here are the contents of my /etc/odbcinst.ini file:

        [Oracle]
        Description = Oracle ODBC Connection
        Driver      = /usr/lib/libsqora.so.11.1
        Setup       =
        FileUsage   =

    and my /etc/odbc.ini file:

        [Oracle]
        Application Attributes = T
        Attributes = W
        BatchAutocommitMode = IfAllSuccessful
        CloseCursor = F
        DisableDPM = F
        DisableMTS = T
        Driver = Oracle
        EXECSchemaOpt =
        EXECSyntax = T
        Failover = T
        FailoverDelay = 10
        FailoverRetryCount = 10
        FetchBufferSize = 64000
        ForceWCHAR = F
        Lobs = T
        Longs = T
        MetadataIdDefault = F
        QueryTimeout = T
        ResultSets = T
        ServerName = //<host>:<port>/<db>
        SQLGetData extensions = F
        Translation DLL =
        Translation Option = 0
        UserID =

    (ServerName has been edited; host, port, and db are actually there, and correct.) When I run isql I get:

        $ isql -v Oracle
        isql: symbol lookup error: /usr/lib/libsqora.so.11.1: undefined symbol: SQLGetPrivateProfileStringW

    And running dltest gives me:

        $ dltest Oracle SQLConnect
        [dltest] ERROR dlopen: Oracle: cannot open shared object file: No such file or directory

    If anyone has any insights I would be grateful; I've been trying to get this to connect for about 5 hours now. I am going home for the night, but will gladly provide more details tomorrow morning to anyone willing to help.
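
    A hedged suggestion, resting on the assumption that the missing symbol lives in unixODBC's installer library (libodbcinst): forcing that library to load before the Oracle driver sometimes clears exactly this lookup error.

        # the libodbcinst path is a guess for this RHEL 5 layout
        export LD_PRELOAD=/usr/lib/libodbcinst.so
        isql -v Oracle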

    Read the article

  • Linked server problem on SQL Server 2005

    - by BradyKelly
    I have a weird issue and I hope someone can steer me in the right direction for resolving it, please. When I execute the following query against a linked server, I get the error below. I can connect to the server in SSMS as a separate server and execute a similar query against its Deposits table. The nn.nn is my own replacement to avoid broadcasting our server addresses. The query:

        select td.Batch, td.DateTimeDeposited
        from Deposits cd
        left join [172.nn.nn.32\sqlexpress].Terminal.dbo.Deposits td
          on cd.DateTimeDeposited = td.DateTimeDeposited

    The error:

        OLE DB provider "SQLNCLI" for linked server "172.nn.nn.11\sqlexpress" returned message "Login timeout expired".
        OLE DB provider "SQLNCLI" for linked server "172.nn.nn.11\sqlexpress" returned message "An error has occurred while establishing a connection to the server. When connecting to SQL Server 2005, this failure may be caused by the fact that under the default settings SQL Server does not allow remote connections.".
        Msg 65535, Level 16, State 1, Line 0
        SQL Network Interfaces: Error Locating Server/Instance Specified [xFFFFFFFF].

    Notice how the error is about server 172.nn.nn.11 and not 172.nn.nn.32. SOLVED (STUPID ME): Somebody had added an extra bit to my query that was scrolled off-screen and was querying the 172.nn.nn.11 server.
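
    For anyone hitting the same symptom, a quick hedged sketch for checking what the instance thinks its linked servers are (the server name is the placeholder from above):

        -- ping the linked server definition directly
        EXEC sp_testlinkedserver N'172.nn.nn.32\sqlexpress';

        -- list every linked server this instance knows about
        SELECT name, data_source FROM sys.servers WHERE is_linked = 1;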

    Read the article

  • Managing scalability and availability with two servers running Apache Httpd, Apache Mina and MySQL

    - by celalo
    Hello, I am not a developer and I don't have much experience with scalable server architectures, but I am in need of a highly available and scalable system for one of my projects. There are going to be two servers I will use for the time being, both with 4-core CPUs and 8 GB RAM with RAID structures, running CentOS 5.4. I will also have a feature called "Failover IP" which makes it possible to redirect an IP address to another server within a short time. The applications which will run on the servers: there is going to be a Java application based on the Apache Mina server, handling TCP requests from some hundreds of network devices, where the devices send as much as one request per minute. Handling those requests includes parsing them and inserting a few rows into the database; parsing requests before inserting data into the DB takes negligible time. There is going to be a MySQL server, as I stated above. Also, there is going to be a PHP web application running on Apache httpd which uses the same DB as the Java application. What I wish is to make the most of those two servers. I was imagining having the servers identical, sharing the work load. MySQL could be a cluster, maybe? And if some application fails, or the whole machine goes down, the other will continue serving the requests seamlessly - reminding you that a "Failover IP" feature will be available for me to take advantage of. It should also be kept in mind that the number of servers could increase in time, to meet demand. What can you suggest? What kinds of tools can I make use of? What kinds of monitoring software (paid/unpaid) are there? Thanks in advance.
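
    As an illustration of one common building block (a sketch only, and it assumes self-managed failover rather than the hosting provider's "Failover IP" feature): a VRRP daemon such as keepalived can float a shared virtual IP between the two boxes, moving it automatically when the active node stops answering.

        vrrp_instance VI_1 {
            state MASTER             # the peer runs state BACKUP
            interface eth0
            virtual_router_id 51
            priority 100             # give the peer a lower priority
            virtual_ipaddress {
                192.0.2.100          # placeholder shared service IP
            }
        }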

    Read the article

  • extreme slowness with a remote database in Drupal

    - by ceejayoz
    We're attempting to scale our Drupal installations up and have decided on some dedicated MySQL boxes. Unfortunately, we're running into extreme slowness when we attempt to use the remote DB - page load times go from ~200 milliseconds to 5-10 seconds. Latency between the servers is minimal - a tenth or two of a millisecond.

        PING 10.37.66.175 (10.37.66.175) 56(84) bytes of data.
        64 bytes from 10.37.66.175: icmp_seq=1 ttl=64 time=0.145 ms
        64 bytes from 10.37.66.175: icmp_seq=2 ttl=64 time=0.157 ms
        64 bytes from 10.37.66.175: icmp_seq=3 ttl=64 time=0.157 ms
        64 bytes from 10.37.66.175: icmp_seq=4 ttl=64 time=0.144 ms
        64 bytes from 10.37.66.175: icmp_seq=5 ttl=64 time=0.121 ms
        64 bytes from 10.37.66.175: icmp_seq=6 ttl=64 time=0.122 ms
        64 bytes from 10.37.66.175: icmp_seq=7 ttl=64 time=0.163 ms
        64 bytes from 10.37.66.175: icmp_seq=8 ttl=64 time=0.115 ms
        64 bytes from 10.37.66.175: icmp_seq=9 ttl=64 time=0.484 ms
        64 bytes from 10.37.66.175: icmp_seq=10 ttl=64 time=0.156 ms

        --- 10.37.66.175 ping statistics ---
        10 packets transmitted, 10 received, 0% packet loss, time 8998ms
        rtt min/avg/max/mdev = 0.115/0.176/0.484/0.104 ms

    Drupal's devel.module timers show the database queries aren't running any slower on the remote DB - about 150 microseconds whether it's the local or the remote server. Profiling with XHProf shows PHP execution times that aren't out of whack, either. The number of queries doesn't seem to make a difference - we see the same 5-10 second delay whether a page has 12 queries or 250. Any suggestions about where I should start troubleshooting here? I'm quite confused.
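
    A hedged first test, since the devel timers measure query execution but not connection setup: time the raw connect-plus-query round trip from the web box itself (credentials and database name below are placeholders). A slow mysqli_connect() would point at per-request connection overhead - for example, reverse-DNS lookups on the MySQL side, for which the usual fix is skip-name-resolve in my.cnf.

        <?php
        $start = microtime(true);
        $db = mysqli_connect('10.37.66.175', 'user', 'pass', 'drupal');
        printf("connect: %.1f ms\n", (microtime(true) - $start) * 1000);

        $start = microtime(true);
        for ($i = 0; $i < 100; $i++) {
            mysqli_query($db, 'SELECT 1');
        }
        // 100 queries, so multiply elapsed seconds by 10 for ms per query
        printf("avg round trip: %.3f ms\n", (microtime(true) - $start) * 10);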

    Read the article

  • smtp.gmail.com from bash gives "Error in certificate: Peer's certificate issuer is not recognized."

    - by ndasusers
    I needed my script to email the admin if there is a problem, and the company only uses Gmail. Following a few posts' instructions, I was able to set up mailx using a .mailrc file. There was first an error about nss-config-dir; I solved that by copying some .db files from a Firefox directory to ./certs and pointing to it in .mailrc. A mail was sent. However, the error above came up. By some miracle, there was a Google certificate in the .db. It showed up with this command:

        ~]$ certutil -L -d certs

        Certificate Nickname                              Trust Attributes
                                                          SSL,S/MIME,JAR/XPI
        GeoTrust SSL CA                                   ,,
        VeriSign Class 3 Secure Server CA - G3            ,,
        Microsoft Internet Authority                      ,,
        VeriSign Class 3 Extended Validation SSL CA       ,,
        Akamai Subordinate CA 3                           ,,
        MSIT Machine Auth CA 2                            ,,
        Google Internet Authority                         ,,

    Most likely it can be ignored, because the mail worked anyway. Finally, after pulling some hair and many googles, I found out how to rid myself of the annoyance. First, export the existing certificate to an ASCII file:

        ~]$ certutil -L -n 'Google Internet Authority' -d certs -a > google.cert.asc

    Now re-import that file, and mark it as trusted for SSL certificates, ala:

        ~]$ certutil -A -t "C,," -n 'Google Internet Authority' -d certs -i google.cert.asc

    After this, the listing shows it trusted:

        ~]$ certutil -L -d certs

        Certificate Nickname                              Trust Attributes
                                                          SSL,S/MIME,JAR/XPI
        ...
        Google Internet Authority                         C,,

    And mailx sends out with no hitch:

        ~]$ /bin/mailx -A gmail -s "Whadda ya no" [email protected]
        ho ho ho
        EOT
        ~]$

    I hope this is helpful to someone looking to be done with the error. Also, I am curious about some things. How could I get this certificate if it were not in the Mozilla database by chance? Is there, for instance, something like this?

        ~]$ certutil -A -t "C,," \
            -n 'gmail.com' \
            -d certs \
            -i 'http://google.com/cert/this...'
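
    On the closing question: certutil's -i expects a local file, but the certificate can be pulled straight off the server with openssl instead of borrowing a browser's database. A hedged sketch (port 465 assumes SMTPS, and the sed chain extraction is quick and dirty):

        echo | openssl s_client -connect smtp.gmail.com:465 -showcerts 2>/dev/null \
          | sed -n '/BEGIN CERTIFICATE/,/END CERTIFICATE/p' > google.cert.asc
        certutil -A -t "C,," -n 'Google Internet Authority' -d certs -i google.cert.asc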

    Read the article

  • Firebird 2.1: gfix -online returns "database shutdown"

    - by darvids0n
    Hey all. Googling this one hasn't made a bit of difference, unfortunately, as most results specify the syntax for onlining a database after using gfix -shut -force 30 (or any other number of seconds) as gfix -online dbname, and I have run gfix -online dbname with and without login credentials for the DB in question. The message that I get is:

        database dbname shutdown

    Which is fine, except that I want to bring it online now. It's out of the question to close fbserver.exe (running on a Windows box; afaik it's Classic Server 2.1.1 but it may be Super) since we have other databases running off of that which need almost 24/7 uptime. The message from doing another gfix -shut -force, or -attach, or -tran is:

        invalid shutdown mode for dbname

    which appears to match the documentation of what happens if the database is already fully shut down. Ideas and input greatly appreciated, especially since at the moment time is a factor for me. Thanks! EDIT: The whole reason I shut down the DB was to clear out "active" transactions which were linked to a specific IP address, and that computer is my dev terminal (actually a virtual machine where I develop frontends for the database software), but I had no processes connecting to the database at the time. They looked like orphaned transactions to me, and they weren't in limbo afaik. Running a manual sweep didn't clear them out, and deleting the rows from MON$STATEMENTS didn't work, even though Firebird 2.1 supposedly supports cancelling queries that way. My last resort was to "restart" the database, hence the above issue.
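
    A hedged guess at the missing piece, untested against this exact state: Firebird 2.x distinguishes shutdown modes (normal/multi/single/full), and a fully shut down database sometimes needs the target mode named explicitly when bringing it back:

        gfix -online normal mydb.fdb -user SYSDBA -password masterkey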

    Read the article

  • OTRS upgrade 3.0 to 3.1 fails

    - by Valentin0S
    Today I started upgrading OTRS from version 2.3 to 2.4, 2.4 to 3.0, and 3.0 to 3.1. Everything went smoothly except the upgrade from 3.0 to 3.1. OTRS provides a few Perl scripts which make the upgrade easier, and I used these scripts for each upgrade step. The upgrade from 3.0 to 3.1 fails after running the upgrade script scripts/DBUpdate-to-3.1.pl. The error is:

        root@tickets:/opt/otrs# su - otrs
        $ scripts/DBUpdate-to-3.1.pl
        Migration started...
        Step 1 of 24: Refresh configuration cache...
        If you see warnings about 'Subroutine Load redefined', that's fine, no need to worry!
        Subroutine Load redefined at /opt/otrs/Kernel/Config/Files/ZZZAAuto.pm line 5.
        Subroutine Load redefined at /opt/otrs/Kernel/Config/Files/ZZZAuto.pm line 4.
        done.
        Step 2 of 24: Check framework version... done.
        Step 3 of 24: Creating DynamicField tables (if necessary)... done.
        DBD::mysql::db do failed: Cannot add or update a child row: a foreign key constraint fails (`pp_otrs`.`dynamic_field`, CONSTRAINT `FK_dynamic_field_create_by_id` FOREIGN KEY (`create_by`) REFERENCES `users` (`id`)) at /opt/otrs-3.1.10/Kernel/System/DB.pm line 478.
        ERROR: OTRS-DBUpdate-to-3.1-10 Perl: 5.14.2 OS: linux Time: Wed Sep 5 15:36:20 2012

        Message: Cannot add or update a child row: a foreign key constraint fails (`pp_otrs`.`dynamic_field`, CONSTRAINT `FK_dynamic_field_create_by_id` FOREIGN KEY (`create_by`) REFERENCES `users` (`id`)), SQL: 'INSERT INTO dynamic_field (name, label, field_order, field_type, object_type, config, valid_id, create_time, create_by, change_time, change_by) VALUES (?, ?, ?, 'Text', 'Ticket', '--- {} ', 1, '2012-09-05 15:36:20' , 1, '2012-09-05 15:36:20' , 1)'

        Traceback (20405):
          Module: main::_DynamicFieldCreation (v1.85) Line: 466
          Module: scripts/DBUpdate-to-3.1.pl (v1.85) Line: 95

        Could not create new DynamicField TicketFreeKey1 at scripts/DBUpdate-to-3.1.pl line 477.
        Step 4 of 24: Create new dynamic fields for free fields (text, key, date)...
        $

    Did anyone else face the same issue? Thanks in advance.
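
    A hedged diagnostic rather than an official OTRS fix: the failing INSERT sets create_by = 1, so the foreign key can only fail if no row with id 1 exists in the users table. Checking that before re-running the script would confirm whether an earlier migration step lost the initial root/admin user:

        SELECT id, login FROM users WHERE id = 1;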

    Read the article

  • How to install PHP5.3 and SQLite3 on Ubuntu 8.04

    - by richard
    Hello, I got an Ubuntu Hardy VPS and I am trying to install PHP 5.3 with SQLite. I added the dotdeb PHP 5.3 repository and succeeded in installing PHP 5.3, but I need to install SQLite as well. When I try to install php5-sqlite3 (sudo aptitude install php5-sqlite3) this is the output:

        The following packages are BROKEN:
          php5-sqlite3
        The following NEW packages will be automatically installed:
          php-db php-pear php-sqlite3
        The following NEW packages will be installed:
          php-db php-pear php-sqlite3
        0 packages upgraded, 4 newly installed, 0 to remove and 0 not upgraded.
        Need to get 460kB of archives. After unpacking 3027kB will be used.
        The following packages have unmet dependencies:
          php5-sqlite3: Depends: phpapi-20060613 which is a virtual package.
        Resolving dependencies...
        The following actions will resolve these dependencies:

        Remove the following packages:
          libapache2-mod-php5
          php5
          php5-mysql

        Install the following packages:
          php-pear [5.2.4-2ubuntu5.10 (hardy-updates, hardy-security)]

        Downgrade the following packages:
          php5-cli [5.3.1-0.dotdeb.1 (<NULL>, now) -> 5.2.4-2ubuntu5.10 (hardy-updates, hardy-security)]
          php5-common [5.3.1-0.dotdeb.1 (<NULL>, now) -> 5.2.4-2ubuntu5.10 (hardy-updates, hardy-security)]
          php5-suhosin [5.3.1-0.dotdeb.1 (<NULL>, now) -> 0.9.22-1 (hardy)]

        Score is 197

        Accept this solution? [Y/n/q/?]

    Obviously, downgrading PHP is not an option. Please help me! If upgrading the server to a newer release of Ubuntu makes things easier, that's not a problem.
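
    A hedged reading of that output: the hardy php5-sqlite3 package is built against the stock PHP's module ABI (phpapi-20060613), which the dotdeb PHP 5.3 packages no longer provide - hence aptitude offering to downgrade everything. If dotdeb ships its own SQLite module built for 5.3, installing that one instead should sidestep the conflict (the package name here is an assumption):

        sudo aptitude install php5-sqlite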

    Read the article

  • How to set up an Apache SSL proxy to OpenERP 7 running in a VM?

    - by Johnbritto
    I have installed OpenERP v7 in an Ubuntu 12.04 virtual machine from Launchpad, i.e. server, web, addons. I configured an SSL reverse proxy on the virtual machine, and my configuration for <VirtualHost *:443> is:

        ServerName openerp.mydomain.net
        ServerAdmin openerp@localhost
        SSLEngine on
        SSLCertificateFile /etc/ssl/openerp/server.crt
        SSLCertificateKeyFile /etc/ssl/openerp/server.key
        ProxyRequests Off
        ProxyPreserveHost On
        <Proxy *>
            Order deny,allow
            Allow from all
        </Proxy>
        ProxyVia On
        ProxyPass / http://172.16.150.14:8069/
        ProxyPassReverse / http://172.16.150.14:8069/
        RequestHeader set "X-Forwarded-Proto" "https"
        # Fix IE problem (http apache proxy dav error 408/409)
        SetEnv proxy-nokeepalive 1
        </VirtualHost>

    On the host, I have configured an Apache reverse proxy for my subdomain in vhost_ssl.conf as:

        SSLEngine On
        SSLProxyEngine On
        ProxyRequests Off
        ProxyPreserveHost On
        <Proxy *>
            Order deny,allow
            Allow from all
        </Proxy>
        ProxyPass / https://172.16.150.14/
        ProxyPassReverse / https://172.16.150.14/
        SetEnv proxy-nokeepalive 1
        <Location />
            Order allow,deny
            Allow from all
        </Location>

    I have set 172.16.150.14 on the netrpc and xmlrpc interfaces in openerp-server.conf. Now, when I access https://openerp.mydomain.net from the Firefox and Chrome browsers, I get http://openerp.mydomain.net%2C%20openerp.mydomain.net/?db=testingdb, which makes a 404. But when I access the URL from IE 9, https://openerp.mydomain.net works OK. Secondly, if I change the parameter list_db = false, then the link works as expected. Kindly let me know what is creating the bottleneck with the URL redirect to http://openerp.mydomain.net, openerp.mydomain.net/?db=testdb on Firefox and Chrome. I am stuck here doing troubleshooting to get the URL to work.
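
    Since list_db = false already works around it, a hedged related option: OpenERP 7 can also pin the visible database by regex with dbfilter in openerp-server.conf, which avoids the ?db= redirect while leaving other databases reachable under other hostnames (the value below is a placeholder):

        ; in openerp-server.conf
        dbfilter = ^testingdb$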

    Read the article

  • What is there in Win 7 Pro (or Ultimate) that is not there in Home Premium? - Especially considering this situation..

    - by Senthil
    I want to know the REAL difference between Windows 7 Home Premium and Professional/Ultimate. In India, the cost of the different versions:

        Ultimate     - 11,200 INR
        Professional - 10,700 INR
        Home Premium -  6,600 INR

    The absolute cost of the first two is so high to me that the difference (500 INR) doesn't matter. So to me there is really no choice between the first two - if I decide to buy the Professional version, I'd rather go for Ultimate itself. What I want to know is whether Home Premium is enough for my needs. I tried searching for comparisons, but many look like just marketing junk from MS. They are short and vague. According to this page, the major differences between Pro and Home Premium are:

        - Run many Windows XP productivity programs in Windows XP Mode.
        - Connect to company networks easily and more securely with Domain Join.

    You can do both in Pro but not in Home Premium. I intend to use my Windows 7 for a small business - just starting up. So I'll be dealing with the following:

        - All kinds of development tools and servers
        - Very important - I will run virtual machine software (MS VPC, VMware, Sun VirtualBox, etc.)
        - My system will be acting as the server for most purposes till I can afford dedicated servers
        - Connecting the system to a variety of network devices (PCs, printers, etc.)
        - Running productivity, business and financial apps
        - Any other small software startup business requirement that I haven't thought of yet

    Professional (and Ultimate) is twice as expensive as Home Premium, so it'd be great if someone could point out the things you cannot do with Home Premium when you use it like I explained above, so that I can make a decision about which one to buy. I need some real-life experiences so that I can make an informed decision - not a decision based on marketing junk.

    Read the article

  • Explain why folder's permissions differ depending on HOW user is accessing server AFP vs SSH

    - by Meltemi
    Hoping someone can explain what is probably fairly obvious... but confuses me. Imagine two users with admin privileges on our server (Mac OS X Server 10.5). Call them joe & bob. Both users are members of these groups:

        Staff      Group ID: 20
        Workgroup  Group ID: 1025

    The shared folder "devfolder" has sharing set as so:

        POSIX:
          Owner: joe    read & write
          Group: admin  read & write
          Other         no access
        ACL:
          Workgroup     Allow Read & write

    The question is why, when looking at the same folder, does the ownership appear to change depending on who's doing the looking?!? Both are looking at the same folder on the server. From Joe's perspective:

        xserve:devfolder joe$ ls -l
        drwxrwxr-x 6 joe workgroup 204 May 20 19:32 app
        drwxrwxr-x 9 joe workgroup 306 May 20 19:32 config
        drwxrwxr-x 3 joe workgroup 102 May 20 19:32 db
        drwxrwxr-x 3 joe workgroup 102 May 20 19:32 doc
        drwxrwxr-x 3 joe workgroup 102 May 20 19:32 lib

    And from Bob's perspective (folder mounted on his machine via AFP):

        bobmac:devfolder bob$ ls -l
        drwxrwxr-x 6 bob _bob 264 May 20 19:32 app
        drwxrwxr-x 9 bob _bob 264 May 20 19:32 config
        drwxrwxr-x 3 bob _bob 264 May 20 19:32 db
        drwxrwxr-x 3 bob _bob 264 May 20 19:32 doc
        drwxrwxr-x 3 bob _bob 264 May 20 19:32 lib

    Now, if Bob connects to the server via SSH, his output is identical to Joe's, as expected. Can anyone tell me what the client is doing in this case, and what should be expected when Bob creates or updates files in this folder? What tools do I have to better understand this from the command line? Is this normal? Perhaps there's a "cleaner" way that wouldn't be confusing with "bob _bob"?!?
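
    A hedged pointer for the "what tools" part, using flags that exist on Mac OS X 10.5: compare what the AFP mount reports with what the server itself enforces, ACLs included (the mount path is a placeholder).

        # -e prints ACL entries alongside the POSIX permissions
        ls -le /Volumes/devfolder

        # confirm how each user resolves on the client vs. on the server
        id joe; id bob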

    Read the article

  • Spreadsheet RDBMS

    - by John Nilsson
    I'm looking for software (or a set of software) that will let me combine spreadsheet and database workflows: data entry in a spreadsheet to enable simple entry from the clipboard, analysis based on joins, unions and aggregates, and pivot/data pilot summaries. So far I've only found either spreadsheets OR DB applications, but no good combination. OpenOffice Base with Calc for tables doesn't support aggregates, for example. Google Spreadsheet + Visualization API doesn't support unions or joins, and Zoho DB doesn't let me paste from the clipboard. Any hints on software that could be used? Basically, I'm trying to do some analysis of my personal bank transactions. Problem 1, ETL: the data has to be moved from my bank to a database. My current solution is to manually copy and paste the data into one spreadsheet per account from my internet bank. Pains: not very scriptable; lots of scrolling to reach the point to paste; have to apply sorting and formatting to the pasted data each time. Problem 2, analysis: I then want to aggregate the different accounts in one sweep to track transfers per type of transfer over all accounts. The actual aggregation is still unsolved, because I can't find a UNION equivalent in the spreadsheets I've tried.
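
    For reference, a sketch of what the unsolved aggregation looks like in plain SQL, which is the capability being shopped for (table and column names are made up):

        -- combine the per-account sheets, then summarize transfers by type
        SELECT type, SUM(amount) AS total
        FROM (
            SELECT * FROM account_a
            UNION ALL
            SELECT * FROM account_b
        ) AS all_accounts
        GROUP BY type;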

    Read the article

  • Uploadify Flash Uploader and Random UPLOAD_ERR_CANT_WRITE errors

    - by dcneiner
    I am using Uploadify to provide progress bar support for file uploads on a PHP app I built. It works perfectly for a few uploads, then every few uploads it fails, and the data from the $_FILES array reveals an UPLOAD_ERR_CANT_WRITE error (error code 7). I ran Paros proxy between my browser and the server to see the difference between a passing and a failing request. The only difference was the content separator for the multi-part post, which changes every time. I would conclude this was fully a server error, except that with a plain-jane form I cannot reproduce the error. I am not a server guy, so please let me know what information is needed to troubleshoot this and I will update the question with those details. I did place these lines in the .htaccess, but to no avail. The site is hosted on Rackspace Cloud Sites, so my configuration options are limited:

        <IfModule mod_security.c>
            SecFilterEngine Off
            SecFilterScanPOST Off
        </IfModule>
        php_value upload_max_filesize 10M
        php_value post_max_size 10M
        php_value max_execution_time 200
        php_value max_input_time 200
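
    Since error code 7 specifically means PHP failed while writing the uploaded file to disk, a hedged first check is the temp directory itself rather than the request format:

        <?php
        // where PHP is trying to write uploads, and whether it can
        var_dump(ini_get('upload_tmp_dir'));
        var_dump(sys_get_temp_dir(), is_writable(sys_get_temp_dir()));
        // intermittent failures sometimes track free space
        var_dump(disk_free_space(sys_get_temp_dir()));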

    Read the article

  • Sending Adobe PDF attachments from Adobe Reader (in Outlook 2003) takes too long

    - by White Island
    I have a customer who is using Outlook 2003 (Microsoft Online Services) and Adobe Reader 9+. When they send a PDF from Adobe Reader to Outlook (via the "Send as attachment to e-mail" feature in Adobe), it freezes for 30 seconds to 5 minutes before the new e-mail pops up with the PDF attachment. I'm pretty sure the issue is on the Outlook side of things, as I've tried Adobe Reader 8 and Foxit Reader with the same results (Windows XP/7 doesn't seem to make a difference, either). I tried Outlook in safe mode on the first (Win7) machine I was working on, and the e-mail attachment worked a lot faster, but when I tried to replicate the results on another machine, one wouldn't go into safe mode and the other didn't seem to show a difference. In an effort to fix the problem in Outlook normal mode, I tried disabling all add-ins, the COM add-in (Office Communicator is the only one), the reading pane, Word 2003 as e-mail editor... but none of these seemed to address the issue. Does anyone have any other ideas? I need to get this resolved as soon as possible, and it doesn't seem practical to make them run in safe mode. :P

    Read the article

  • Citrix client slow to launch

    - by user706837
    Was wondering if anyone else has experienced the Citrix client being slow to launch. While I'm a Windows SA by trade, I consider myself Novice+ on Linux, but I doubt that's the problem. This is the simple scenario:

        1. Log in to the Citrix server to work from home.
        2. Click on the published application; this typically starts the local Citrix client.
        3. The Citrix client should start and you're off.

    The problem is between #2 and #3: I click on the application, and 8 out of 9 times there is a 60-second delay and then I get an SSL connection error. I suspect this error is misleading, since the connection took too long to open, but I don't know how to prove it (or fix it). I'm able to successfully launch wfcmgr manually without errors, so this leads me to believe the Citrix client is installed correctly. I even leave it running, thinking this may help, but I don't see a difference with or without it running first. The only times I'm able to connect successfully are when the Citrix client starts up a few seconds after clicking on the application. I've searched online for articles that might help, and tried a number of fixes without much difference. I even tried "ln -sf /dev/urandom /dev/random" as suggested by this article, but no dice: http://forums.citrix.com/message.jspa?messageID=1381276 My system (specs that may be relevant): Sony VAIO laptop VGN-NW270F, Linux Mint 11.04; the problem occurs in both Firefox and Chrome. Any help would be appreciated - just trying to either find an answer or get guidance on how to determine why it's taking so long to launch the Citrix client. Thanks.
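
    A hedged way to see where those 60 seconds actually go, assuming the standard Linux client install path (adjust to the actual one): trace the client and look for the long gap between timestamps - often a DNS or certificate-revocation lookup that eventually times out.

        # launch.ica stands in for the file the browser downloads
        strace -f -tt -o wfica.trace /usr/lib/ICAClient/wfica launch.ica
        # then hunt for a large jump between consecutive timestamps
        less wfica.trace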

    Read the article

  • Performance variation

    - by Ree
    During my time spent working with multiple machines, I have noticed that the performance of the same machine doing the same tasks in the same order differs, and sometimes the difference is big enough to be noticeable. This applies to all the machines I've owned and/or maintained (old and modern). Some examples (many of which you may have noticed yourself) that are sometimes completed in different time frames:

        - POST
        - OS installation
        - Hardware tests and operations (usually executed within a customized OS such as one of the many DOS variants), HDD tests and "low level" formats
        - Software installation or other tasks (such as benchmarks) within a general-purpose OS (Windows, Linux, etc.)

    I can imagine this is caused by the fact that a machine is built from many components having to communicate as a whole, and since the mechanical and electronic parts aren't perfect, overhead occurs. In the last example, I assume the OS complexity and multiple concurrently running processes have some additional effect as well. However, I'm wondering if this hardware imperfection and overhead is really high enough to be humanly noticeable? Maybe there are other factors that are as influential, or even more so? So, in short - why? To emphasize: the difference is noticeable on the same machine performing the same tasks, and this applies to ANY machine in my experience. I'm not comparing machine-to-machine performance.

    Read the article

  • mysql master-master setup as a way to simplify master-slave promotion

    - by Chris Go
    I'm trying to see if the following plan is viable. The goal here is HA (uptime), not load distribution - writes are fine on one MySQL 5.5 server (with InnoDB), but not really possible when the database is down. Currently, I have a master-slave replication setup which works fine, except it doesn't have automatic promotion (obviously). What I am planning to do is set up master-master replication to get this "automatic promotion" using Amazon Route 53 DNS failover (health checks). What I am trying to avoid is having to do the auto-increment trick, because the "business folks" got used to the auto-incrementing PK as consecutive numbers (yeah, I know this is bad, but the data goes back to 2004). So: set up the master-master replication WITHOUT the auto-increment collision prevention bit. The primary master is db1.domain.com and the secondary master is db2.domain.com. In Amazon Route 53, set up a DNS failover record for db.domain.com:

        - primary failover: db1.domain.com, with a TCP health check on its IP address, port 3306
        - secondary failover: db2.domain.com, with a TCP health check on its IP address, port 3306

    Most of the time (99%), unless tcp://db1.domain.com:3306 is dead, db1.domain.com will be served up on DNS hits to db.domain.com - in fact, hopefully this is 100%. The possible downside of this is the loss of a primary key (collision), and I think I am OK with losing one order. We are a low-data-volume B2B business and can just call our client up if this occurs (like an order disappearing). Does this sound like a good plan? Then I will also run another slave replication off db1.domain.com as "master" to a slave-db1.domain.com - not sure why, maybe for heavy SELECTs?
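
    For concreteness, a hedged sketch of the replication-relevant settings on db1 (mirror them on db2 with server-id = 2). It deliberately omits auto_increment_increment/auto_increment_offset, per the plan above - which is exactly what leaves simultaneous writes to both masters free to collide:

        [mysqld]
        server-id  = 1
        log-bin    = mysql-bin
        # needed so writes replicated in from db2 also reach slave-db1
        log-slave-updates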

    Read the article

  • Puppet classes out of order despite explicit arrow operator use

    - by Alexandr Kurilin
    Absolute Puppet beginner here. I'm experiencing an interesting behavior with my Puppet manifests and would love to know what I'm doing wrong. Let's say, for example, that I'm configuring the instance with the following ordered classes:

        class { 'update_system': } ->
        class { 'facter': } ->
        class { 'user_sshkey':
          user => 'ubuntu',
          type => 'rsa',
        } ->
        class { 'tmux':
          user => 'ubuntu',
        } ->
        class { 'vim':
          user => 'ubuntu',
        } ->
        class { 'bashrc':
          user => 'ubuntu',
        } ->
        notify { "Configuring DB role": } ->
        class { 'postgresql': }

    When I run the manifest with the --debug switch, by looking at notify statements I can see the classes being executed in the following order:

        1. update_system starts
        2. a cron type inside of the postgresql class (the very last class in that ordered list above) is executed
        3. postgres::install starts
        5. facter starts installing
        6. postgres::configure and postgres::service start
        7. the vim class is executed
        8. the "Configuring DB role" notification is made. All the way at the end here.
        etc

    Basically, the thing is all over the place; the order doesn't seem to follow the arrow operators in any way. I'm guessing I'm missing something here that would force the classes to execute one at a time. Could it be that I'm missing some kind of anchor pattern here? Invalid containment?
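
    Since the question raises it, a hedged sketch of the classic anchor pattern (the anchor resource type comes from puppetlabs-stdlib; inner class names are guesses based on the debug output above). Arrows between class declarations order the class resources themselves, but resources declared by classes *inside* postgresql can float outside its position in the graph unless they are contained - which matches the symptom described:

        class postgresql {
          anchor { 'postgresql::begin': } ->
          class  { 'postgresql::install': } ->
          class  { 'postgresql::configure': } ->
          class  { 'postgresql::service': } ->
          anchor { 'postgresql::end': }
        }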

    Read the article

  • Latency between IIS and SQL on the same physical host, across two VMs

    - by Jerad Rose
    I have a single server (2x4-core CPUs, 32GB RAM) that is a Windows Server 2012 Hyper-V host, and it hosts two guest VMs (also Windows Server 2012 instances). One of them is a web server, the other is a SQL server. When hitting a page that loops over 50 records, there is noticeable latency. I capture and report the timings of each iteration of the loop, and each iteration is about 20-30 milliseconds. Of course, this amounts to over a second of latency for the whole loop. I thought maybe SQL needed to be tuned, but running Profiler on it, the queries show almost 0 duration, so it seems the bottleneck is in transit between the two VMs. I have both VMs configured to use the actual NIC (vs. using a VNIC), so maybe that's part of my problem. Also, this is a classic ASP site, so it's using the SQL OLE DB provider, and I'm wondering if that is part of the problem. This is a new server setup, migrated from an existing Windows 2003/IIS6 setup where both web and DB ran on the same server instance (no virtualization). On that setup, there is no such latency when looping over the cursor like this. But there are so many variables, I'm not sure where to start ruling things out.
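
    A hedged way to split the suspects, run from the web VM (the instance name is a placeholder): if a raw round trip is fast outside the site, the 20-30 ms per iteration points back at the classic ASP/OLE DB layer (for example, a connection opened per loop iteration instead of reused) rather than at the virtual network.

        :: one full connect-plus-query round trip, timed
        powershell -Command "Measure-Command { sqlcmd -S SQLVM01 -Q 'SELECT 1' }"
        :: raw network latency between the two VMs
        ping -n 20 SQLVM01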

    Read the article

  • Should I use /etc/bind/zones/ or /var/cache/bind/?

    - by nbolton
    Each tutorial seems to have a different opinion on this. For my ISC BIND zones, should I use /etc/bind/zones/ or /var/cache/bind/? In the last install, I used /var/cache/bind/ but only because I was guided to do so; however, I just spotted a pid file in there on this new Debian install, so I figured that using the "working directory" to store zone files probably wasn't the best idea. It seems that many admins use this so they don't have to type the full path when declaring a new zone. For example:

        file "/etc/bind/zones/db.foobar.com";

    Instead of:

        file "db.foobar.com";

    The latter is obviously easier to type, but is it good or bad practice? Some may also suggest setting the working directory to /etc/bind/zones:

        options {
            // directory "/var/cache/bind";
            directory "/etc/bind/zones";
        };

    ... but something tells me this isn't good practice, since the pid file would be created there, I assume (unless it's in /var/cache/bind by coincidence). I took a look at the manpage, but it didn't seem to say what the directory option was for; any ideas exactly what it was designed for?
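
    A hedged side note, using options that exist in stock BIND 9: the pid file location is a setting of its own, so the working directory and the pid file don't have to be coupled in the first place.

        options {
            directory "/var/cache/bind";
            pid-file  "/var/run/named/named.pid";
        };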

    Read the article

  • process and memory issue on linux server

    - by zapping
    Need some assistance in analyzing Apache and PHP processes running on a Linux server. It's an 8-core Intel processor with 4GB RAM. When the website on it runs, top displays this:

        PID   USER       PR  NI  VIRT  RES   SHR   S  %CPU  %MEM  TIME+     COMMAND
        23459 username1  16   0  151m  27m   8388  S  11.3   0.7   0:11.71  php5
        23730 username1  16   0  151m  28m   8388  S  11.3   0.7   0:03.87  php5
        23458 username1  16   0  151m  28m   8388  S   3.0   0.7   0:19.20  php5
        16202 mysql      15   0  459m  38m   4624  S   0.7   1.0  62:33.81  mysqld
        24141 nobody     15   0  311m  5832  2304  S   0.3   0.1   0:00.03  httpd

    Why does the command show as php5 when the website is accessed? Both Apache and PHP were preconfigured, so I'm not sure what was done there. I tried setting up the same site and DB on a different server, but on it the process always shows httpd and never php5. The site uses a MySQL DB. The problem is that the server load seems to go up to about 5.x when the website is accessed by about 16 users. When the free -m command was given, the output shows:

                     total       used       free     shared    buffers     cached
        Mem:          3941       3727        213          0        236       2734
        -/+ buffers/cache:        756       3184
        Swap:         4095          0       4095

    Lots of memory seems to be in cache and free memory is low. Even when the website is not accessed - leaving it very much idle for about two days - the free memory showed just 190. When the site is accessed, the free memory seems to go down to about 90MB, then it increases to about 150MB; it always seems to remain at just about 200MB. Is this somehow related to the server load showing 5.x? Will adding some more RAM resolve the load issue?

    Read the article
