Search Results

Search found 19499 results on 780 pages for 'transaction log'.

Page 253 of 780

  • synchronization web service methodologies or papers

    - by Grady Player
    I am building a web service (PHP+JSON) to sync with my iPhone app. The main goals are: backup; provide a web view for printing, sorting, and manipulating; allow a group to sync up and down. I am aware of the logic problems with all of these items, i.e. if one person deletes something, do you persist this change to other users, collisions, etc. I am looking for any book or scholarly work, or even words of wisdom, that addresses the common issues: when to detect changes of data with hashes vs. modified dates, or a combination; how to address consolidation of sequential IDs originating on different client nodes (this can be sidestepped in my context, but it would be interesting); dealing with collisions (is there a universally safe way to do so?); general best practices; and how to structure the actual data transaction (ask for the whole list and then detect changes...).
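    As an illustration of the hash-vs-modified-date question, here is a minimal Python sketch of server-side change detection. It is only a sketch under assumptions: the record layout, the field names, and the idea of storing per-record fingerprints from the previous sync are hypothetical, not part of the original question.

        import hashlib
        import json

        def record_fingerprint(record):
            """Content hash of a record, ignoring volatile bookkeeping fields (names assumed)."""
            payload = {k: v for k, v in record.items() if k not in ("modified_at", "synced_hash")}
            canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
            return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

        def detect_changes(records, last_synced):
            """Compare current records against fingerprints stored at the last sync.

            last_synced maps record id -> fingerprint from the previous sync.
            Returns the ids that were added, changed, or deleted since then.
            """
            current = {r["id"]: record_fingerprint(r) for r in records}
            added = [rid for rid in current if rid not in last_synced]
            changed = [rid for rid, fp in current.items() if rid in last_synced and fp != last_synced[rid]]
            deleted = [rid for rid in last_synced if rid not in current]
            return {"added": added, "changed": changed, "deleted": deleted}

        # Example: one record was edited, one is new, one disappeared.
        previous = {"a1": record_fingerprint({"id": "a1", "title": "old"}), "a2": "deadbeef"}
        now = [{"id": "a1", "title": "new"}, {"id": "a3", "title": "fresh"}]
        print(detect_changes(now, previous))

    A modified-date approach replaces the fingerprint with a timestamp comparison; the hash approach costs more to compute but does not depend on client clocks being correct.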

    Read the article

  • In-Memory OLTP Sample for SQL Server 2014 RTM

    - by Damian
    I have just found a very good resource about Hekaton (the In-Memory OLTP feature in SQL Server 2014). On the Codeplex site you can find the newest Hekaton samples - https://msftdbprodsamples.codeplex.com/releases/view/114491. The previous samples were written for the CTP2 version, but the newest ones work with the RTM version. Some issues you might have hit when running the previous samples on the RTM version have been fixed: Update (Apr 28, 2014): Fixed an issue where the isolation level for sample stored procedures demonstrating integrity checks was too low. The transaction isolation level for the following stored procedures was updated: Sales.uspInsertSpecialOfferProductinmem, Sales.uspDeleteSpecialOfferinmem, Production.uspInsertProductinmem, and Production.uspDeleteProductinmem.

    Read the article

  • Login takes a long time

    - by Arkaprovo Bhattacharjee
    I have been using Ubuntu 12.04 for the past 12 days. In the beginning login was fast enough: after I entered the password it hardly took 3 to 4 seconds to reach the desktop, but now it takes more than 40 seconds to show the desktop after entering the password. What is the problem, and is there any solution? P.S. There are only two programs (psensor and jupiter) that start automatically after login. boot.log:
        fsck from util-linux 2.20.1
        /dev/sda6: clean, 254544/3325952 files, 2133831/13285632 blocks
         * Stopping Userspace bootsplash [ OK ]
         * Stopping Flush boot log to disk [ OK ]
         * Starting mDNS/DNS-SD daemon [ OK ]
        Skipping profile in /etc/apparmor.d/disable: usr.sbin.rsyslogd
        Skipping profile in /etc/apparmor.d/disable: usr.bin.firefox
         * Starting bluetooth daemon [ OK ]
         * Starting network connection manager [ OK ]
         * Starting AppArmor profiles [ OK ]
         * Stopping System V initialisation compatibility [ OK ]
         * Starting CUPS printing spooler/server [ OK ]
         * Starting System V runlevel compatibility [ OK ]
         * Starting Bumblebee supporting nVidia Optimus cards [ OK ]
         * Starting LightDM Display Manager [ OK ]
         * Starting save kernel messages [ OK ]
         * Starting anac(h)ronistic cron [ OK ]
         * Starting ACPI daemon [ OK ]
         * Starting regular background program processing daemon [ OK ]
         * Starting deferred execution scheduler [ OK ]
        speech-dispatcher disabled; edit /etc/default/speech-dispatcher
         * Starting CPU interrupts balancing daemon [ OK ]

    Read the article

  • Oracle GoldenGate 12c - Leading Enterprise Replication

    - by Doug Reid
    Oracle GoldenGate 12c was released on October 17th and includes several new cutting-edge features that firmly establish GoldenGate's leadership position in the data replication space. In fact, this release more than doubles the performance of data delivery, supports Oracle's new multitenant database feature, is more secure, has more options for high availability, and has made great strides toward simplifying the configuration and deployment of the product. Read through the press release if you haven't already, and do not miss the quote from CERN's Eva Dafonte Perez regarding Oracle GoldenGate 12c: "….performs five times faster compared to previous GoldenGate versions and simplifies the management of a multi-tier environment". There are a variety of new and improved features in Oracle GoldenGate 12c. Here are the highlights:
    Optimized for Oracle Database 12c - GoldenGate 12c is custom tailored to the unique capabilities of Oracle Database 12c, and out of the box it supports multitenant (pluggable database, PDB) and non-consolidated deployments of Oracle Database 12c. The naming convention used by Database 12c is now in three parts (PDB name, schema name, and object name). We have made changes to the GoldenGate capture process to support the new naming convention and streamlined the whole process, so a single GoldenGate capture process is used at the container level rather than at each individual PDB. Having the capture process at the container level reduces resource usage and the number of processes. To view a conceptual architecture diagram click here.
    Integrated Delivery for the Oracle Database - Leveraging a lightweight streaming API built exclusively for Oracle GoldenGate 12c, this process distributes load, auto-tunes the degree of parallelism, scales better, and delivers changed data to the Oracle database at blinding rates. One of the goals for Oracle GoldenGate 12c was to reduce IT costs by simplifying the configuration and reducing the time needed to manage complex infrastructures. In previous versions of Oracle GoldenGate, customers would split transaction loads by grouping tables into multiple different delivery processes (click here to view the previous method). Each delivery process executed independently and without any interaction with or knowledge of other delivery processes. This setup was complicated to configure and time consuming, as the developer needed in-depth knowledge of the source and target schemas and the transaction profile. With GoldenGate 12c and Integrated Delivery we have made it easier to configure and faster to deploy. To view a conceptual architecture diagram of Integrated Delivery click here.
    Coordinated Delivery for Non-Oracle Databases - Coordinated Delivery orchestrates high-speed apply processes and simplifies the configuration of GoldenGate for non-Oracle targets. In Oracle GoldenGate 12c a single delivery process is used with multiple threads (click here), and key events, such as primary key updates, event markers, DDL, etc., are coordinated between the various threads to ensure that transactions are applied in the same sequence as they were captured, all while delivering improved performance.
    Replication Between On-Premises and Cloud-Based Systems - The trend for businesses to utilize both on-premises and cloud-based systems is rising, and businesses need to replicate data back and forth. GoldenGate 12c can be configured in a variety of ways to provide real-time replication when unrestricted or restricted (limited ports or HTTP tunneling) networks sit between on-premises and cloud-based systems.
    Expanded Heterogeneity - It wouldn't be a GoldenGate release without new and improved platform support. Release 1 includes support for MySQL 5.6 and Sybase 15.7. In the next release of GoldenGate, support will be expanded for MS SQL Server, DB2, and Teradata.
    Tighter Security - Oracle GoldenGate 12c is integrated with the Oracle Wallet to shield usernames and passwords using strong encryption and aliases. Customers accustomed to using the Oracle Wallet with other Oracle products will instantly be familiar with how to use this great new feature.
    Expanded Oracle Application and Technology Support - GoldenGate can be used along with Oracle Coherence to enable real-time changed data feeds to the Coherence cache using TopLink and the Oracle GoldenGate JMS adapter. Plus, Oracle Advanced Customer Services (ACS) now offers low-downtime E-Business Suite platform and database migrations using GoldenGate as the enabling technology.
    Stay tuned for more blog posts on the new features and the upcoming launch webcast, where we will go into these new features in more detail. In the meantime, make sure to read through our white paper "Oracle GoldenGate 12c Release 1 New Features Overview".

    Read the article

  • Visual Studio 2012 Installation fails on Windows 7

    - by Vipul
    I am trying to install Visual Studio 2012 on a Windows 7 Home Premium 64-bit machine, but the installation fails. I tried to install all versions (Premium, Ultimate, etc.) but get the error below. The machine is not domain joined and I am logged in as an administrative user. I was using Security Essentials but turned it off before the installation. The installation source is from MSDN. The error log is too big to upload; the important portion from the log: [2B6C:2580][2012-09-16T23:06:40]: MUX: ERROR: The type initializer for 'System.Windows.Media.FontFamily' threw an exception.

    Read the article

  • System Center Configuration Manager 2007 - Debugging Client Installs

    - by Dayton Brown
    Hi All: Having an issue installing the CCMSetup client on desktops. CCMSetup makes it to the PC, the files are there, it gets added to the services for automatic start, it starts, but it quits almost instantly. Logs on the desktop show an entry like this: <![LOG[Failed to successfully complete HTTP request. (StatusCode at WinHttpQueryHeaders: 404)]LOG]!><time="14:28:51.183+240" date="06-11-2009" component="ccmsetup" context="" type="3" thread="2388" file="ccmsetup.cpp:5808"> What am I missing? EDIT: The firewall is off on both client and server.

    Read the article

  • Visual SVN server Running but cannot access / browse repositories

    - by user1783560
    Operating System: Windows Web Server 2008 R2. VisualSVN version: 2.5.7. Subversion: 1.7.7. Apache: 2.2.22. I freshly installed the latest version of VisualSVN on the server and created one repository in it. The server management window shows that the server is up and running, but when I try to browse it in a web browser, it doesn't respond. I am not able to import my existing code into the repository (Error: Cannot connect to server) or open/browse the repository with any of these URLs: localhost:81/svn, http://www.myserver.com:81/svn, or http://myIPAddress:81/svn. The VisualSVN log is clean. The last information in the server log is "The server is listening to port 81."

    Read the article

  • Removing QWERTY Keyboard Layout Permanently

    - by Phoenix
    Following the instructions in this thread, I added the Dvorak layout to the Regional and Language Options control panel, set it as the default keyboard layout and removed the US (QWERTY) layout. However, even though I removed the QWERTY layout, it still appears in my language bar, and my system defaults to it in every new window. This persists after a log-out/log-in and even a system restart. How do I remove the QWERTY keyboard layout from my system permanently? Alternatively (if outright removing QWERTY is just impossible), can I get Windows to default to Dvorak instead of QWERTY for new windows?

    Read the article

  • SOA 11g Technology Adapters – ECID Propagation

    - by Greg Mally
    Overview: Many SOA Suite 11g deployments include the use of the technology adapters for various activities, including integration with FTP, database, and files to name a few. Although the integrations with these adapters are easy and feature rich, there can be some challenges from the operations perspective. One of these challenges is how to correlate a logical business transaction across SOA component instances. This correlation is typically accomplished via the execution context ID (ECID), but we lose the ECID correlation when the business transaction spans technologies like FTP, database, and files. A new feature has been introduced in the Oracle adapter JCA framework to allow the propagation of the ECID. This feature is available in the forthcoming SOA Suite 11.1.1.7 (PS6). The basic concept of propagating the ECID is to identify somewhere in the payload of the message where the ECID can be stored. Then two Binding Properties, relating to the location of the ECID in the message, are added to either the Exposed Service (left-hand side of the composite) or External Reference (right-hand side of the composite). This gives the JCA framework enough information to either extract the ECID from or add the ECID to the message. In the scenario of extracting the ECID from the message, the ECID will be used for the new component instance.
    Where to Put the ECID: When trying to determine where to store the ECID in the message, you basically have two options: add a new optional element to your message schema, or leverage an existing element that is not used in your schema. The best scenario is that you are able to add the optional element to your message, since trying to find an unused element will prove difficult in most situations. The schema will be holding the ECID value, which looks something like the following: 11d1def534ea1be0:7ae4cac3:13b4455735c:-8000-00000000000002dc
    Configuring Composite Services/References: Now that you have identified where you want the ECID to be stored in the message, the JCA framework needs to have this information as well. The two pieces of information that the framework needs relate to the message schema: the namespace for the element in the message, and the XPath to the element in the message. To better understand this, let's look at an example based on a simple database table. When an Exposed Service is created via the Database Adapter Wizard in the composite, a schema is generated for it. For this example, the two Binding Properties we add to the ReadRow service in the composite are:
        <!-- Properties for the binding to propagate the ECID from the database table -->
        <property name="jca.ecid.nslist" type="xs:string" many="false">
          xmlns:ns1="http://xmlns.oracle.com/pcbpel/adapter/db/top/ReadRow"</property>
        <property name="jca.ecid.xpath" type="xs:string" many="false">
          /ns1:EcidPropagationCollection/ns1:EcidPropagation/ns1:ecid</property>
    Notice that the property called jca.ecid.nslist contains the targetNamespace defined in the schema, and the property called jca.ecid.xpath contains the XPath statement to the element. The XPath statement also contains the appropriate namespace prefix (ns1), which is defined in the jca.ecid.nslist property. When the Database Adapter service reads a row from the database, it will retrieve the ECID value from the payload and remove the element from the payload. When the component instance is created, it will be associated with the retrieved ECID, and the payload contains everything except the ECID element/value. The only time the ECID is visible is when it is stored safely in the resource technology, like the database, a file, or a queue.
    Simple Database/File/JMS Example: This section contains a simplified example of how the ECID can propagate through a database table, a file, and a JMS queue. The flow of the example composite is as follows:
    1. Invoke the database insert using the insertwithecidbpelprocess_client_ep Service.
    2. The InsertWithECIDBPELProcess adds a row to the database via the Database Adapter. The JCA framework adds the ECID to the message prior to inserting.
    3. The ReadRow Service retrieves the record and the JCA framework extracts the ECID from the message. The ECID element is removed from the message.
    4. An instance of ReadRowBPELProcess is created and it is associated with the retrieved ECID.
    5. The ReadRowBPELProcess now writes the record to the file system via the File Adapter. The JCA framework adds the ECID to the message prior to writing the message to file.
    6. The ReadFile Service retrieves the record from the file system and the JCA framework extracts the ECID from the message. The ECID element is removed from the message.
    7. An instance of ReadFileBPELProcess is created and it is associated with the retrieved ECID.
    8. The ReadFileBPELProcess now enqueues the message via the JMS Adapter. The JCA framework adds the ECID to the message prior to enqueuing the message.
    9. The DequeueMessage Service retrieves the record and the JCA framework extracts the ECID from the message. The ECID element is removed from the message.
    10. An instance of DequeueMessageBPELProcess is created and it is associated with the retrieved ECID.
    11. The logical flow ends.
    When viewing the Flow Trace in Enterprise Manager, you will now see all the instances correlated via the ECID. Please check back here when SOA Suite 11.1.1.7 is released for this example; with it you can run everything yourself and reinforce what has been shared in this blog via hands-on experience. One final note: the contents of this blog may be included in the official SOA Suite 11.1.1.7 documentation, but you will still need to come here to get the example.
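    To make the namespace/XPath mechanism concrete, here is a small Python sketch that extracts and strips an ECID element from a payload using the same two pieces of information the binding properties carry. This is only an illustrative analogy, not the Oracle JCA framework code, and the ns1:payload child element in the sample document is a hypothetical stand-in for the rest of the record.

        import xml.etree.ElementTree as ET

        # Values analogous to the jca.ecid.nslist / jca.ecid.xpath binding properties.
        ECID_NAMESPACES = {"ns1": "http://xmlns.oracle.com/pcbpel/adapter/db/top/ReadRow"}
        ECID_XPATH = ".//ns1:ecid"  # ElementTree wants a relative path rather than the absolute form

        SAMPLE_PAYLOAD = """
        <ns1:EcidPropagationCollection xmlns:ns1="http://xmlns.oracle.com/pcbpel/adapter/db/top/ReadRow">
          <ns1:EcidPropagation>
            <ns1:payload>hello</ns1:payload>
            <ns1:ecid>11d1def534ea1be0:7ae4cac3:13b4455735c:-8000-00000000000002dc</ns1:ecid>
          </ns1:EcidPropagation>
        </ns1:EcidPropagationCollection>
        """

        def extract_ecid(xml_text):
            """Return (ecid, payload-without-ecid), mimicking what happens on the read side."""
            root = ET.fromstring(xml_text)
            ecid_elem = root.find(ECID_XPATH, ECID_NAMESPACES)
            ecid = ecid_elem.text if ecid_elem is not None else None
            # Remove the ECID element so downstream components never see it.
            parent = None
            for candidate in root.iter():
                if ecid_elem is not None and ecid_elem in list(candidate):
                    parent = candidate
                    break
            if parent is not None:
                parent.remove(ecid_elem)
            return ecid, ET.tostring(root, encoding="unicode")

        ecid, stripped = extract_ecid(SAMPLE_PAYLOAD)
        print("ECID:", ecid)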

    Read the article

  • Targus USB-to-RS232 not working with Linux?

    - by Ethan Leroy
    I have the Targus PA088 USB-to-RS232 converter, but it seems that it does not work with Linux. Its RX and TX lights are flashing, but I can't see the data in minicom/picocom. When using it with Windows and hterm, everything's fine. Any idea what could be the problem? Additional info: when I plug in the adapter, I can see the following messages in /var/log/messages.log:
        Nov 25 01:47:31 localhost kernel: [ 831.787066] usb 2-1.1: new full speed USB device number 5 using ehci_hcd
        Nov 25 01:47:32 localhost kernel: [ 832.554810] mct_u232 2-1.1:1.0: MCT U232 converter detected
        Nov 25 01:47:32 localhost kernel: [ 832.555002] usb 2-1.1: MCT U232 converter now attached to ttyUSB0
        Nov 25 01:47:32 localhost mtp-probe: checking bus 2, device 5: "/sys/devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.1"
        Nov 25 01:47:32 localhost mtp-probe: bus: 2, device: 5 was not an MTP device

    Read the article

  • SqlServer2005 Enterprise Fast Recovery, SqlAgent Availability, and Replication

    - by automatic
    I have a database under SQL Server 2005 Enterprise 64-bit SP3 that is in phase 3 of 3 of recovery after a reboot without a normal shutdown. It looks like, with fast recovery, the database became available when recovery moved into phase 3. However, it seems (based on a message in the SQL Agent log) that SQL Agent is "started" but not available to run jobs until recovery completes. I have other databases on the same server that are transactional publications. It seems to me that if I let users update the published databases, transactions will start to build up in the log, but won't be moved to the distribution database or onto the subscribers because SQL Agent isn't running jobs. Should I be overly concerned about performing updates before

    Read the article

  • Unity isn't starting on 13.10 (with Cinnamon 2.0 installed)

    - by Sam Pearman
    Since upgrading to 13.10, I can't log in to the Unity desktop. LightDM works correctly, but attempting to log in tries to start the session and then drops back to LightDM. I've already dropped to a terminal (Ctrl+Alt+F2) and done this:
        sudo apt-get update
        sudo apt-get install --reinstall ubuntu-desktop
        sudo apt-get install unity
    Logging in as a guest session also fails. Logging in to other window managers works with varying degrees of success. Note: I have Cinnamon 2.0 installed from a PPA. I'm using a two-monitor setup. Also of note is that in the session prior to my upgrade to 13.10, the background of Unity failed to display at all, instead showing whatever was in the screen buffer from the previous frame. The entire OS worked correctly otherwise, so I just ignored it for the session. No other upgrades or even updates were done prior to this occurring. My upgrade path to 13.10 was basically this: install 13.04 alongside Windows 7, use Ubuntu as a glorified web browser for a while, get updates (in preparation for 13.10), install 13.10. I also used Unity Tweak Tool to change some aspects of Unity, particularly auto-hide. Any help or ideas would be appreciated, as I'm typing this on my phone :(

    Read the article

  • mpd conflicting with other applications -- taking control of pulse?

    - by Jamie Schembri
    Simple explanation: if mpd is playing and sound attempts to play through another application, x, sound from x will not be output. If sound from another application, x, is playing and mpd then attempts to play, no sound will be output from mpd whilst sound from x continues to play. Details: I first noticed this problem with Flash, and this continues to be the most common scenario. I posted a question about this before realising it was not strictly Flash-related; instead it is something to do with mpd. My biggest frustration comes from trying to get mpd working again, as I can't seem to pin down any method. Sometimes pulseaudio -k seems to help, other times sudo /etc/init.d/mpd restart, other times killing Chromium (due to Flash) with SIGTERM. Most of the time it's a combination of the above. I think this might be because I run mpd as another user and use PulseAudio. It is not run as root or the current user. Also, mpd is compiled with pulse support. I have tried numerous things, but I honestly couldn't recite what, as it has been some time. I'd rather not go poking around without some direction, but I'd be really happy to fix this problem once and for all. mpd.conf (simplified by removing comments/blank lines):
        music_directory "/var/lib/mpd/music"
        playlist_directory "/var/lib/mpd/playlists"
        db_file "/var/lib/mpd/tag_cache"
        log_file "/var/log/mpd/mpd.log"
        pid_file "/var/run/mpd/pid"
        state_file "/var/lib/mpd/state"
        user "mpd"
        bind_to_address "wilson"
        input {
            plugin "curl"
        }
        audio_output {
            type "pulse"
            name "My Pulse Output"
        }
        filesystem_charset "UTF-8"
        id3v1_encoding "UTF-8"
    Question: for the sake of keeping this a question: does anyone know what is causing this, or how to fix it?

    Read the article

  • How to avoid maximum Workgroup Manager connections in Mac OS X Server 10.6

    - by Stephan
    Is there a limit in the Mac OS X Server (10.6) Workgroup Manager with respect to concurrent connections to a server? I have an OS X server up and running and Open Directory configured, but I am not able to log in remotely, as I get the message that the maximum number of connections for Workgroup Manager has already been reached and I should wait for a user to disconnect. Even after a restart I get this message remotely. However, locally on the server I can start Workgroup Manager without any issues; it always lets me connect. Any advice on what I need to do to make Workgroup Manager work from a remote location? I could not find any max-connection setting in Server Admin and nothing in the slapd log files. The server license says unlimited, so I am quite sure it should not be a regular error message telling me I should upgrade.

    Read the article

  • Cron: job starts but doesn't complete

    - by Guandalino
    I have a problem with a cron job which starts but doesn't complete. Running the command manually works fine. I already read the page about cron issues and solutions here on Ask Ubuntu and tried the proposed solutions, but didn't find an answer that works in my case. I'm using Ubuntu 12.04.
        $ crontab -e
        SHELL=/bin/bash # otherwise it would be /bin/sh
        59 16 * * * /bin/duply calendar backup > /tmp/duply.log
    Btw, the cron file ends with an empty line, as someone pointed out. Once the job has "finished"...:
        $ cat /tmp/duply.log
        Start duply v1.5.7, time is 2012-06-22 16:59:01.
    Instead, running the script manually works correctly and gives this output:
        Start duply v1.5.7, time is 2012-06-22 17:06:39.
        [cut]
        ... here is a long output generated by duply.
        ... and yes, files have been backed up.
        [cut]
        --- Finished state OK at 17:06:42.581 - Runtime 00:00:03.170 ---
    I also tried to restart the cron daemon (sudo service cron restart) but nothing changed. Do you have any suggestions to fix the issue?

    Read the article

  • Graphite running under daemontools becomes defunct

    - by pradeepchhetri
    I am running carbon-cache.py and carbon-aggregator.py under daemontools. When I made some changes in storage-schema.conf and tried to restart carbon-cache.py, I found that it becomes a zombie very frequently.
        root 3367 3366 0 03:23 pts/1 00:00:00 supervise carbon-aggregator
        root 3371 3366 0 03:23 pts/1 00:00:00 supervise carbon-cache
        root 3373 3367 3 03:23 pts/1 00:00:02 /usr/bin/python /usr/bin/carbon-aggregator.py --debug start
        root 3379 3372 0 03:23 pts/1 00:00:00 multilog t /var/log/multilog/carbon-cache
        root 3382 3368 0 03:23 pts/1 00:00:00 multilog t /var/log/multilog/carbon-aggregator
        root 3638 3371 21 03:24 pts/1 00:00:00 [carbon-cache.py] <defunct>
    Can someone tell me what may be the reason?

    Read the article

  • I can't launch any Win8 apps after upgrading to Windows 8.1

    - by locka
    I just upgraded to 8.1 and now none of the Metro apps start. The issue is that if I start any Metro app, including the Store and PC Settings, it immediately fails. The classic desktop is fine, as are standard programs; it's just the Metro apps. If I look in the system event log I see errors like this: Activation of application winstore_cw5n1h2txyewy!Windows.Store failed with error: This application does not support the contract specified or is not installed. See the Microsoft-Windows-TWinUI/Operational log for additional information. In addition, the tiles in Metro have a small cross icon on them. I suspect that my Live ID (which I somehow managed to skip during the update) is not set properly and consequently none of the online stuff works. But how do I fix this? I can't start PC Settings, I can't start the Store, and I see no way in the classic desktop of setting these things. I don't want to have to reinstall for this. Is there a simple fix?

    Read the article

  • RSH between servers not working

    - by churnd
    I have two servers: one CentOS 5.8 and one Solaris 10. Both are joined to my workplace AD domain via PBIS-Open. A user will log into the Linux server and run an application which issues commands over RSH to the Solaris server. Some commands are also run on the Linux server, so both are needed. Due to the application these servers are being used for (proprietary GE software), the software on the Linux server needs to be able to issue rsh commands to the Solaris server on behalf of the user (the user just runs a script and the rest is automatic). However, rsh is not working for the domain users. It does work for a local user, so I believe I have the necessary trust settings between the two servers correct. However, I can rlogin as a domain user from the Linux server to the Solaris server. SSH works too (how I wish I could use it). Some relevant info, via rlogin:
        [user@linux~]$ rlogin solaris
        connect to address 192.168.1.2 port 543: Connection refused
        Trying krb4 rlogin...
        connect to address 192.168.1.2 port 543: Connection refused
        trying normal rlogin (/usr/bin/rlogin)
        Sun Microsystems Inc. SunOS 5.10 Generic January 2005
        solaris%
    via rsh:
        [user@linux ~]$ rsh solaris ls
        connect to address 192.168.1.2 port 544: Connection refused
        Trying krb4 rsh...
        connect to address 192.168.1.2 port 544: Connection refused
        trying normal rsh (/usr/bin/rsh)
        permission denied.
        [user@linux ~]$
    Relevant snippet from /etc/pam.conf on Solaris:
        # rlogin service (explicit because of pam_rhost_auth)
        rlogin auth sufficient pam_rhosts_auth.so.1
        rlogin auth requisite pam_lsass.so set_default_repository
        rlogin auth requisite pam_lsass.so smartcard_prompt try_first_pass
        rlogin auth requisite pam_authtok_get.so.1 try_first_pass
        rlogin auth sufficient pam_lsass.so try_first_pass
        rlogin auth required pam_dhkeys.so.1
        rlogin auth required pam_unix_cred.so.1
        rlogin auth required pam_unix_auth.so.1
        # Kerberized rlogin service
        krlogin auth required pam_unix_cred.so.1
        krlogin auth required pam_krb5.so.1
        # rsh service (explicit because of pam_rhost_auth, and pam_unix_auth for meaningful pam_setcred)
        rsh auth sufficient pam_rhosts_auth.so.1
        rsh auth required pam_unix_cred.so.1
        # Kerberized rsh service
        krsh auth required pam_unix_cred.so.1
        krsh auth required pam_krb5.so.1
    I have not really seen anything useful in either system log that seems to be directly related to the failed login attempt. I've tail -f'd /var/adm/messages on Solaris and /var/log/messages on Linux during the failed attempts and nothing shows up. Maybe I need to be doing something else?

    Read the article

  • How can I artificially create a slow query in MySQL?

    - by Gray Race
    I'm giving a hands-on presentation in a couple of weeks. Part of this demo covers basic MySQL troubleshooting, including use of the slow query log. I've generated a database and installed our app, but it's a clean database and therefore it is difficult to generate enough problems. I've tried the following to get queries into the slow query log:
        Set the slow query time to 1 second.
        Deleted multiple indexes.
        Stressed the system: stress --cpu 100 --io 100 --vm 2 --vm-bytes 128M --timeout 1m
        Scripted some basic webpage calls using wget.
    None of this has generated slow queries. Is there another way of artificially stressing the database to generate problems? I don't have enough skills to write a complex JMeter or other load generator. I'm hoping perhaps for something built into MySQL or another Linux trick beyond stress.
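    One way to land entries in the slow query log is simply to issue deliberately slow statements, such as SELECT SLEEP(n) or an unindexed self-join. The Python sketch below is an assumption-laden example, not a recommended production tool: it assumes the mysql-connector-python package is installed, and the host, user, password, and database names are hypothetical throwaway demo values.

        import mysql.connector  # assumes the mysql-connector-python package is installed

        # Hypothetical connection details for a throwaway demo database.
        conn = mysql.connector.connect(host="localhost", user="demo", password="demo", database="demo")
        cur = conn.cursor()

        # Simplest case: SLEEP() exceeds a 1-second long_query_time, so the
        # statement is written to the slow query log without loading the server.
        cur.execute("SELECT SLEEP(2)")
        cur.fetchall()

        # A more "realistic" slow query: build a modest table and self-join it
        # on a non-indexable expression, forcing full scans of the cross product.
        cur.execute("CREATE TABLE IF NOT EXISTS slow_demo (val INT)")
        cur.executemany("INSERT INTO slow_demo (val) VALUES (%s)", [(i,) for i in range(5000)])
        conn.commit()
        cur.execute("SELECT COUNT(*) FROM slow_demo a JOIN slow_demo b ON a.val - b.val = 1")
        print(cur.fetchall())

        cur.close()
        conn.close()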

    Read the article

  • Which approach would lead to an API that is easier to use?

    - by Clem
    I'm writing a JavaScript API and for a particular case, I'm wondering which approach is the sexiest. Let's take an example: writing a VideoPlayer, I add a getCurrentTime method which gives the elapsed time since the start. The first approach simply declares getCurrentTime as follows: getCurrentTime():number, where number is the native number type. This approach includes a CURRENT_TIME_CHANGED event so that API users can add callbacks to be aware of time changes. Listening to this event would look like the following: myVideoPlayer.addEventListener(CURRENT_TIME_CHANGED, function(evt){ console.log("current time = "+evt.getDispatcher().getCurrentTime()); }); The second approach declares getCurrentTime differently: getCurrentTime():CustomNumber, where CustomNumber is a custom number object, not the native one. This custom object dispatches a VALUE_CHANGED event when its value changes, so there is no need for the CURRENT_TIME_CHANGED event! Just listen to the returned object for value changes! Listening to this event would look like the following: myVideoPlayer.getCurrentTime().addEventListener(VALUE_CHANGED, function(evt){ console.log("current time = "+evt.getDispatcher().valueOf()); }); Note that CustomNumber has a valueOf method which returns a native number, letting the returned CustomNumber object be used as a number, so: var result = myVideoPlayer.getCurrentTime()+5; will work! So in the first approach, we listen to an object for a change in its property's value. In the second one we directly listen to the property for a change of its value. There are multiple pros and cons for each approach; I just want to know which one developers would prefer to use!
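    The question is about JavaScript, but as a language-neutral illustration of the second (observable-value) approach, here is a minimal Python sketch. All names are hypothetical and it is not part of the original question; it just shows how value-change listeners and numeric coercion (the analogue of valueOf) can coexist on one wrapper object.

        class ObservableNumber:
            """A number-like value that notifies listeners when it changes."""

            def __init__(self, value=0):
                self._value = value
                self._listeners = []

            def add_listener(self, callback):
                # callback receives the new value, mirroring a VALUE_CHANGED event
                self._listeners.append(callback)

            def set(self, value):
                if value != self._value:
                    self._value = value
                    for cb in self._listeners:
                        cb(value)

            def __float__(self):          # analogue of JavaScript's valueOf()
                return float(self._value)

            def __add__(self, other):     # lets `current_time + 5` work on the wrapper
                return float(self) + other


        class VideoPlayer:
            def __init__(self):
                self._current_time = ObservableNumber(0)

            def get_current_time(self):
                return self._current_time

            def _tick(self, t):           # would be called by the playback loop
                self._current_time.set(t)


        player = VideoPlayer()
        player.get_current_time().add_listener(lambda t: print("current time =", t))
        player._tick(12.5)                     # prints: current time = 12.5
        print(player.get_current_time() + 5)   # 17.5, thanks to __add__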

    Read the article

  • HP UX can not boot from Ignite Tape

    - by Spirit
    We have an HP rp2470 server running HP-UX 11.00 with LVM mirroring. For redundancy we have a second rp2470 with the same hardware (same two processors, same RAM, same two HDDs, same number of LAN cards). I want to clone the first one to the second. For that purpose I am making an ignite tape with the following command: make_tape_recovery -x inc_entire=vg00. The ignite tape finishes without problems. When I boot the second server from this ignite tape, the server starts to boot, and the ignite restore finishes without any errors, only a few notes, which are normal. However, vmunix does not boot, and when the restore finishes, it boots to the ISL prompt. From this I cannot boot /stand/vmunix. I tried to run the recovery shell, but no success. When the recovery shell asks to run frecover to restore critical files, I receive the error: frecover(5405): unable to open /dev/rmt/0m. At first I thought that the problem might be the difference in the firmware versions of the servers: the fw version of the production server is Firmware Version 43.50 and the fw version of the backup server is Firmware Version 42.19. So I did a fw upgrade of my backup server so that both servers are at v43.50 and tried a recovery, but again I can't boot the system. Next I made another archive tape with the -I (interactive) flag: make_tape_recovery -I -x inc_entire=vg00, and tried recovery with it; again no good. I cannot find any errors or warnings in the ignite log, and I cannot boot HP-UX. I am only at the ISL prompt. This is what I've noticed in the GSP logs:
        ************* SYSTEM ALERT **************
        SYSTEM NAME: mcnfwim1
        DATE: 07/27/2003  TIME: 10:18:49
        ALERT LEVEL: 6 = Boot possible, pending failure - action required
        REASON FOR ALERT
        SOURCE: 8 = I/O
        SOURCE DETAIL: 6 = disk  SOURCE ID: 0
        PROBLEM DETAIL: 0 = no problem detail
        LEDs: RUN ATTENTION FAULT REMOTE POWER
              FLASH OFF ON ON ON
        LED State: Boot Failed. Running non-OS code. Check Chassis and Console Logs for error messages.
        0x00000060860010B0 00000000 00000000 - type 0 = Data Field Unused
        0x58000860860010B0 00006706 1B0A1231 - type 11 = Timestamp 07/27/2003 10:18:49
    And another GSP log:
        Log Entry # 3 :
        SYSTEM NAME: mcnfwim1
        DATE: 07/27/2003  TIME: 10:12:20
        ALERT LEVEL: 6 = Boot possible, pending failure - action required
        SOURCE: 8 = I/O
        SOURCE DETAIL: 6 = disk  SOURCE ID: 0
        PROBLEM DETAIL: 0 = no problem detail
        CALLER ACTIVITY: 1 = test  STATUS: 0
        CALLER SUBACTIVITY: 0B = implementation dependent
        REPORTING ENTITY TYPE: 0 = system firmware
        REPORTING ENTITY ID: 00
        0x00000060860010B0 00000000 00000000 type 0 = Data Field Unused
        0x58000860860010B0 00006706 1B0A0C14 type 11 = Timestamp 07/27/2003 10:12:20
        Type CR for next entry, - CR for previous entry, Q CR to quit.
    Please note that I cannot change anything on the production server. I can only make changes to the backup server. Any help is appreciated.

    Read the article

  • Weird IIS with Windows Authentication + IE problem

    - by Paulius Maruška
    Hello. I have a website running on IIS and using Windows Authentication. All users that are configured to get access to the site are from an AD domain (not local users). In the properties of the website, I have set the AD domain as the realm. Now, when using Firefox, Safari or Chrome, everything is fine: when the user tries to open the site, he gets the login box, he enters simply "username" and "password" (let's pretend that those are an actual login and password :P), and he gets into the site. When using IE, however, things get nasty. When the user tries to open the site, he gets the login box. The user enters the "username" and "password" again, but those get rejected! And when the login box pops up the second time, it has the username filled in as "web-server-domain-name\username", which is wrong, because web-server-domain-name is not the domain where the users reside (that is "ad-domain"). I've spent days trying to figure out what's going on... Note that if I manually enter "ad-domain\username", I get accepted into the site without problems. So, my guess is that IE sends the wrong username if a domain is not specified. Anyway, IE is the only browser that triggers this behavior! Is it possible to do a server-side fix? Maybe it's possible to somehow auto-map the users to AD users? If it's not solvable server-side, is there a client-side fix for this? Thank you. PS: I'm more of a programmer than a sysadmin, so configuring servers isn't my strong side... :P
    UPDATE: @Evan: Yes, "Digest authentication for Windows domain servers" is also enabled. @Eric: The IIS version is 6.0. The authentication methods enabled are Integrated and Digest; all other methods are disabled. As for the security log: I looked at it when doing a "username" and "password" login in Chrome/Firefox and when doing an "ad-domain\username" and "password" login from IE, and the generated log messages are the same (I see no difference, anyway). When entering "username" and "password" I don't see any errors in the security (or any other) log, so I can't tell what method it's trying to use.
    UPDATE 2: As suggested by Eric in the comments, I played around with Fiddler. While doing so, I noticed that when "username" and "password" are entered in FF and IE, the (encrypted) "Authorization" header value sent by IE is longer (almost two times) than the one sent by FF. I tried to disable Windows Integrated authentication and only leave Digest enabled; that fixed the problem (meaning IE used the right realm just like the other browsers), but it caused a bazillion other problems with my site, because with Digest, user impersonation on the server doesn't work (which causes problems when connecting to the database, etc.). Any ideas?

    Read the article

  • Issue with percona-xtrabackup-2.0.0 hot backup on MyISAM tables

    - by arn
    I am trying to implement hot backup for MyISAM tables with percona-xtrabackup-2.0.0 and am getting the following errors. As all the tables are MyISAM, I doubt whether I am using the correct package. Backup:
        ./innobackupex --user="root" --password=<pass> --defaults-file="<path>/my.cnf" --ibbackup="<path>/percona-xtrabackup-2.0.0/bin/xtrabackup" <path>/backup/
        innobackupex: fatal error: no 'mysqld' group in MySQL options
        innobackupex: fatal error: OR no 'datadir' option in group 'mysqld' in MySQL options
    apply-log:
        ./innobackupex-1.5.1 --apply-log --defaults-file=<path>/backup/2012-06-02_09-59-30/backup-my.cnf --ibbackup=<path>/percona-xtrabackup-2.0.0/bin/xtrabackup <path>/backup/2012-06-02_09-59-30/

    Read the article

  • Tips and tricks to make NX server more stable

    - by gareth_bowles
    My shop has been using the FreeNX server on Fedora 11 for a while now and mostly getting good results, especially with performance, but we have some annoying problems with client connections. There are two main issues: client sessions sometimes freeze after a long time (it seems to take at least 2 hours of having the session active), and we often have to make multiple attempts to start a new client session, especially if a previous session was suspended rather than terminated. In quite a few cases, we've had to restart the NX server to get around this. Our NX server configuration is the default, except that we've enabled logging level 7 to /var/log/nxserver.log and set the font server to "unix:/7100" so that it uses xfs. Does anyone have any ideas for making things more stable?

    Read the article

  • Error mounting an encrypted partition

    - by indiajoe
    Using the disk utilities in Ubuntu 11.04, I had encrypted a partition with a passphrase. Each time I clicked on the partition to mount it, it would ask for the passphrase and then mount. All was fine until I installed 12.04. After the installation, this encrypted partition disappeared from the menu.
        fdisk -l /dev/sda
    shows the encrypted partition in the list:
        /dev/sda7 298953648 488392064 94719208+ 7 HPFS/NTFS/exFAT
    I tried the following commands to mount it, but they all gave the following errors:
        $ sudo cryptsetup luksDump /dev/sda7
        Device /dev/sda7 is not a valid LUKS device.
        $ ecryptfs-unwrap-passphrase /dev/sda7
        Passphrase:    # I entered the correct passphrase here...
        Error: Unwrapping passphrase failed [-5]
        Info: Check the system log for more information from libecryptfs
        $ grep ecryptfs /var/log/syslog
        Oct 31 22:43:51 benny ecryptfs-unwrap-passphrase: Error attempting to open [/dev/sda7] for reading
        Nov 1 01:28:02 benny ecryptfs-unwrap-passphrase: Error attempting to open [/dev/sda7] for reading
        Nov 1 01:29:06 benny ecryptfs-unwrap-passphrase: Error attempting to open [/dev/sda7] for reading
    I don't understand why I am getting "Device /dev/sda7 is not a valid LUKS device." Could it be due to some corruption in the partition table? Is there any way to recover this encrypted partition? Thanks, indiajoe

    Read the article
