Search Results

Search found 13889 results on 556 pages for 'results'.


  • cannot mount remote partition using fstab/fuse

    - by HorusKol
    Using a combination of http://ubuntu-tutorials.com/2007/01/02/mount-remote-directories-securely-with-ssh-ubuntu-6061-610/ and http://www.tuxfiles.org/linuxhelp/fstab.html I figured I could mount the root of another computer somewhere on my new laptop to make it easier to transfer files and such. Now, I can connect through SSH and browse the files through an ad-hoc mount, but I would like to be able to do this automatically, so I had a look at fstab. My new entry in fstab is: remote_comp:/ /var/remote_comp fuse defaults 0 0 but testing with mount -a results in the following error: /bin/sh: remote_comp:/: not found I thought the problem was that I was trying to mount the root of the other computer, but even trying sub-directories results in the same error message.
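    With type "fuse", mount appears to treat the first field as a FUSE helper program to execute, which is where the /bin/sh ... not found message seems to come from. A hedged sketch of a working entry, assuming the sshfs package is installed, key-based SSH login already works, and "horuskol" stands in for the real remote username and path:

        # /etc/fstab -- illustrative only; adjust user, host and paths
        horuskol@remote_comp:/  /var/remote_comp  fuse.sshfs  defaults,_netdev  0  0

    After editing, sudo mount /var/remote_comp (or mount -a) should attempt the SSH mount; on older releases the equivalent sshfs#horuskol@remote_comp:/ spelling with type fuse is sometimes needed instead.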

    Read the article

  • Optimizing MySQL -

    - by Josh
    I've been researching how to optimize MySQL a bit, but I still have a few questions. MySQL Primer Results http://pastie.org/private/lzjukl8wacxfjbjhge6vw Based on this, the first problem seems to be that the max_connections limit is too low. I had a similar problem with Apache initially, the max connection limit was set to 100, and the web server would frequently lock up and take an excruciatingly long time to deliver pages. Raising the connection limit to 512 fixed this issue, and I read that raising the connection limit on MySQL to match this was considered good practice. Being that MySQL has actually been "locking up" recently as well (connections have been refused entirely for a few minutes at a time at random intervals) I'm assuming this is the main cause of the issue. However, as far as table cache goes, I'm not sure what I should set this as. I've read that setting this too high can hinder performance further, so should I raise this to right around 551, 560, 600, or do something else? Lastly, as far as raising the join_buffer_size value goes, this doesn't even seem to be included in Debian's my.cnf file by default. Assuming there's not much I can do about adding indexes, should I look into raising this? Any suggested values? Any suggestions in general here would be appreciated as well. Edit: Here's the number of open tables the MySQL server is reporting. I believe this value is related to my question (Opened_tables: 22574)
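    For reference, these settings all live under the [mysqld] section of /etc/mysql/my.cnf on Debian. The values below are only an illustrative sketch, not recommendations for this particular server: table_cache should be sized against how many tables are actually touched between restarts (an Opened_tables counter growing into the tens of thousands is the usual sign it is too small), and join_buffer_size is allocated per join that cannot use an index, so large values multiply quickly.

        [mysqld]
        max_connections  = 512     # match the Apache limit of 512 mentioned above
        table_cache      = 1024    # called table_open_cache on MySQL 5.1.3 and later
        join_buffer_size = 256K    # per-join allocation; keep modest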

    Read the article

  • Issuse with multiple student result printing request

    - by dotman14
    I'm not really sure how to ask this question, but I'll try my best to make it clear. I have a Student Result Application in which students' results are managed over several academic sessions, each having two semesters. During each semester students take different courses and have results for that semester. The application is done now, and I'm using a PDF library to crop the final result page to hand over to the students each semester. If a student requests a particular semester's result it's a straightforward issue, and there are no complications when it comes to printing it out. My issue is this: what about the case where a student requests a combination of semesters, say 3rd year rain semester, 4th year rain semester and 5th year harmattan semester? How can I handle this? Does the user pick these options at the user interface level, or is there a special way to handle cases like this? Also, if I'm to display these multiple results, how could that be done, knowing full well that I'll have to print the different results separately? Hopefully I've been able to make my situation clear. Thanks for your time and patience. Expecting your comments and answers. Thanks.

    Read the article

  • Page_BlockSubmit - reset it to False, if there is a scenario when page doesn't postback on validation error

    - by Vipin
    Recently, I was facing a problem where, if there was a validation error and I changed the state of a checkbox, it wouldn't post back on the first attempt. But when I unchecked and checked it again, it posted back on the second attempt. This is some quirky behaviour in the ASP.NET platform. The solution was to reset the Page_BlockSubmit flag to false, and it works fine. The following explanation is from http://lionsden.co.il/codeden/?p=137&cpage=1#comment-143
    The submit button on the page is a member of vgMain, so automatically it will only run the validation on that group. A solution is needed that will run validation on multiple groups and block the postback if needed.
    Solution: include the following function on the page:

        function DoValidation() {
            // validate the primary group
            var validated = Page_ClientValidate('vgPrimary');

            // if it is valid
            if (validated) {
                // validate the main group
                validated = Page_ClientValidate('vgMain');
            }

            // remove the flag to block the submit if it was raised
            Page_BlockSubmit = false;

            // return the results
            return validated;
        }

    Call the above function from the submit button's OnClientClick event:

        <asp:Button runat="server" ID="btnSubmit" CausesValidation="true"
            ValidationGroup="vgMain" Text="Next"
            OnClick="btnSubmit_Click" OnClientClick="return DoValidation();" />

    What is Page_BlockSubmit? When the user clicks on a button causing a full postback, after running Page_ClientValidate ASP.NET runs another built-in function, ValidatorCommonOnSubmit. Within Page_ClientValidate, Page_BlockSubmit is set based on the validation. The postback is then blocked in ValidatorCommonOnSubmit if Page_BlockSubmit is true. No matter what, at the end of the function Page_BlockSubmit is always reset back to false. If a page does a partial postback without running any validation and Page_BlockSubmit has not been reset to false, the partial postback will be blocked. In essence the above function, DoValidation, acts similarly to ValidatorCommonOnSubmit: it runs the validation and then returns false to block the postback if needed. Since the built-in postback is never run, we need to reset Page_BlockSubmit manually before returning the validation result.

    Read the article

  • Consolidating Oracle E-Business Suite R12 on Oracle's SPARC SuperCluster

    - by Giri Mandalika
    The Oracle Optimized Solution for Oracle E-Business Suite (EBS) R12 12.1.3 is now available on oracle.com. This solution uses the SPARC SuperCluster T4-4, Oracle's first multi-purpose engineered system. Download the free business and technical white papers, which provide significant relevant information and resources. What is an Optimized Solution? Oracle Optimized Solutions are fully documented architectures that have been thoroughly tested, tuned and optimized for performance and availability across the entire stack on a target platform. The technical white paper details the deployed application architecture, along with various observations ranging from installing the application on the target platform to its behavior and performance in highly available and scalable configurations. Oracle E-Business Suite R12 and Oracle Database 11g: multiple Oracle E-Business Suite application modules were tested in this Oracle Optimized Solution -- Financials (online - Oracle Forms & Web requests), Order Management (online - Oracle Forms & Web requests) and HRMS (online - Web requests & payroll batch). Oracle Solaris Cluster and Oracle Real Application Clusters deliver the high availability in this solution. To understand the behavior of the architecture under peak load conditions, determine optimum utilization, verify the scalability of the solution and exercise the high availability features, Oracle engineers tested Oracle E-Business Suite and Oracle Database all running on a SPARC SuperCluster T4-4 engineered system. The test results are documented in the Oracle Optimized Solution white papers to provide general guidance for real-world deployments. Questions & requests: for more information, visit the Oracle Optimized Solution for Oracle E-Business Suite page. If you would like to actually test a specific Oracle E-Business Suite application module on SPARC T4 systems or an engineered system such as the SPARC SuperCluster, please contact the Oracle Solution Center.

    Read the article

  • New Outlook 2003 message, cursor sometimes goes to body, sometimes goes to "To:" field

    - by normalocity
    I've got an Outlook 2003 client that, when you click on "New message", about half the time the cursor defaults to being in the "body" of the message, and the other half of the time it defaults to the cursor being in the "To:" field. Anyone know why this might be happening? Thought it might be related to having Word set, or not set, to be the default email editor, but that had no effect. Also, this particular user reports that, on their previous machine, it always defaults to the "To:" field. I happen to still have that machine around, unmodified from when it was removed from the, and they are correct - it never goes to the body. I also read that some people had this issue and turned off the "Outlook today" feature to fix it, with mixed results. However, in this case the "Outlook today" feature isn't even turned on.

    Read the article

  • Premature end of script headers

    - by Tony
    I often get a "premature end of script headers" error in my apache log which results to an internal 500 error. I understand what the error message means - that my application did not give the browser the headers it needs (and maybe nothing at all), but the odd thing is that this does not happen all the time. It actually usually happens the first few times I go to my website after a deploy. Could this be a memory issue? Does anyone know how to trouble shoot this? My apache log isn't really telling me anything. I am running a ruby site using the rails framework on ubuntu hardy. thank you!

    Read the article

  • Double Filter in Excel

    - by Joe
    I'm trying to "stack" filters in excel, so to speak. I want to filter column A to show anything greater than 30 and then I want to filter column B to show the top ten items. When I do this, however, it shows me all rows that fit both criteria (only five records). I want to first fit the criteria for column A and then filter these results to show the top ten items in column B (10 records total). I know that I could just copy the rows from my first filter to a new sheet and then filter the new worksheet, but is there any way to apply both filters so that I don't physically have to delete records this way? Thanks for your help!

    Read the article

  • Windows 7 Stopped Using hosts file for DNS Resolution

    - by AJ
    I am running Windows 7 Home Premium 64-bit. Starting today, I noticed that DNS resolution is not reading my %SYSTEMROOT%\System32\drivers\etc\hosts file. I say this because I added two new entries to the file and when I run 'nslookup' on the command line, they don't resolve. Further, just trying to resolve 'localhost' results in my primary DNS server being queried. I've read several threads that suggest that the file might have been corrupted and to move it aside and create a new one. I've done that, and no improvement. Is there some sort of registry key that controls the sequence of resources used for DNS resolution (similar to nsswitch.conf on UNIX)? What else could be causing this? Thanks in advance.
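    Two things worth knowing here: nslookup always queries the configured DNS server directly and never consults the hosts file, so it is not a valid test of hosts entries (ping is); and the resolver's path to the file is controlled by a registry value rather than an nsswitch.conf-style sequence. A few checks, assuming an elevated command prompt:

        rem Confirm Windows is looking in the expected directory for the hosts file
        reg query "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v DataBasePath

        rem Flush and inspect the local resolver cache (served by the DNS Client / Dnscache service)
        ipconfig /flushdns
        ipconfig /displaydns

        rem Test with ping rather than nslookup, e.g. for a hosts entry named myhost (hypothetical name)
        ping myhost

    Also make sure the recreated file was saved without a Unicode byte-order mark and without a hidden .txt extension; either will make the resolver ignore it even though everything else looks right.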

    Read the article

  • Reflector Pro has now been released!

    - by CliveT
    After moving into the .NET division in May , and having a great time working on Reflector, I'm pleased to say that the results of that work are now available. Reflector Pro has now been released! The old Reflector as you know and love it is still available free of charge, and as part of this project we've fixed a number of bugs in the de-compilation that have been around for a long time. The Pro version comes as an add-in for Visual Studio - this offers dynamic de-compilation and generation of pdb files which allow you to step into the de-compiled code. Alex has some good pictures of this functionality on his beta post from around a month ago. Thanks to the other guys who've worked on this for taking me along for the ride - Alex, Andrew, Bart and Jason. Stephen did some great usability work, Chris Alford did some great technical authoring and Laila handled the launch publicity. Like all projects, there's always more I'd like to have done, but what we have looks like a pretty powerful addition to the developer's set of tools to me. Please try it and give us feedback on the forum.

    Read the article

  • vqadmin "invalid language file" with google chrome

    - by MrStatic
    We have been running our qmail setup for awhile with no issues. One of our admins has moved to Google Chrome as his main browser and we have noticed something odd. No matter what, when he loads the vqadmin page it errors on him with simply invalid language file. Yet if he loads Firefox, Opera, Safari or shudders IE8 it works fine. Searching google only results in 'Use IE' or 'Set english as your language in the browser'. A: I try to have people avoid IE if possible and B: There is no english option in Chrome.

    Read the article

  • Network really slow with TL-WN951N wireless card

    - by Sam
    I literally just installed ubuntu and it seems to be working great except the network is deadly slow. I'm running a TL-WN951N wireless card which can download at about 600-700 KB/s in windows but in Ubuntu the max speed it seems to get is around 5KB/s. I guess I should note that my WAP is only wireless-G but like I said, I can get much better speeds in Windows. I'm testing the speeds by downloading files from here: http://mirror.internode.on.net/pub/test/ Anyone have any idea? I saw some people recommend downloading drivers and compiling them myself but I'm really really new to all of this so would appreciate someone babying me through it so I don't brick my computer! Here are the results of lspci -v: 04:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 02) Subsystem: Giga-byte Technology GA-EP45-DS5 Motherboard Flags: bus master, fast devsel, latency 0, IRQ 44 I/O ports at c000 [size=256] Memory at e9110000 (64-bit, prefetchable) [size=4K] Memory at e9100000 (64-bit, prefetchable) [size=64K] [virtual] Expansion ROM at e9120000 [disabled] [size=64K] Capabilities: <access denied> Kernel driver in use: r8169 Kernel modules: r8169 05:02.0 Network controller: Atheros Communications Inc. AR5008 Wireless Network Adapter (rev 01) Subsystem: Atheros Communications Inc. Device 3071 Flags: bus master, 66MHz, medium devsel, latency 168, IRQ 18 Memory at e9200000 (32-bit, non-prefetchable) [size=64K] Capabilities: <access denied> Kernel driver in use: ath9k Kernel modules: ath9k
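    Two workarounds that come up often for ath9k throughput problems are disabling wireless power management and turning off hardware crypto in the driver. Treat the following as a hedged sketch rather than a confirmed fix: wlan0 is an assumed interface name (check with iwconfig), and the module option only takes effect after the driver is reloaded or the machine rebooted.

        # turn off power saving on the wireless interface for the current session
        sudo iwconfig wlan0 power off

        # make ath9k do encryption in software (a commonly suggested workaround for poor throughput)
        echo "options ath9k nohwcrypt=1" | sudo tee /etc/modprobe.d/ath9k.conf
        sudo modprobe -r ath9k && sudo modprobe ath9k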

    Read the article

  • pdflatex reads .eps files saved in OS/X, but not in Ubuntu

    - by David B Borenstein
    Sorry if this is a stupid question; I'm a newbie. I am preparing a manuscript in LaTeX. The journal (Physical Biology, an IOP publication) requires that figures be saved in .eps format, so I am trying to do that. However, I cannot get my LaTeX file to build when the .eps files have been generated on my Ubuntu computer. If I save the images on my Mac, the file builds just fine. So far, I have tried saving images in ImageJ, FIJI and Inkscape. The same problem occurs in all three. When using Kile, I get the following error:
        /usr/share/texmf-texlive/tex/latex/oberdiek/epstopdf-base.sty:0: Shell escape feature is not enabled.
    In TeXworks, the error is different, but still there:
        Package pdftex.def Error: File `./figures4/figure4a-eps-converted-to.pdf' not found.
    Now, if I fire up Inkscape, FIJI or ImageJ on OS X, everything works fine. The Mac also can't build with the Ubuntu-saved images. The images generated on the Ubuntu machine open fine using Document Viewer. I am building the same LaTeX file on both computers, with the exact same results. The header of my LaTeX file is:
        \documentclass[12pt]{iopart}
        \usepackage{graphicx}
        \usepackage{epstopdf}
        \usepackage{parskip}
        \usepackage{color}
        \usepackage{iopams}
    And the code for the figure is:
        \begin{figure}
        \center{\includegraphics[width=4in]{./figures4/figure4a.eps}}
        \footnotesize{\caption{\label{fig:4a} (4a) lorem ipsum dolor sic amet.}}
        \end{figure}
    I'd be happy to send an example of both .eps files. Again, sorry if this is a dumb question. I tried everything I could think of before posting here. Thanks, David
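    The first error is literal: the epstopdf package converts .eps figures by calling an external program, and that call is only allowed when the shell-escape feature is on. Two hedged ways around it, assuming the main file is called manuscript.tex (a placeholder name):

        # let pdflatex run the converter itself
        pdflatex -shell-escape manuscript.tex

        # or convert the figure once by hand, producing the file name pdftex.def expects
        epstopdf --outfile=figure4a-eps-converted-to.pdf figures4/figure4a.eps

    If the Ubuntu-saved figures still fail after an explicit conversion, comparing their BoundingBox lines with the Mac-saved ones would be a reasonable next step.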

    Read the article

  • Data Networks Visualized via Light Paintings [Video]

    - by ETC
    All around you are wireless data networks: cellular networks, Wi-Fi networks, a world of wireless communication. Check out this awesome video of network signals mapped over a cityscape. What would happen if you made a device that allowed you to map signal strength onto film? In the following video electronics tinkerers craft an LED meter and use it to paint onto long exposure photographs with phenomenal results. Immaterials: light painting Wi-Fi [via Make]

    Read the article

  • Turn off Windows Defender on your builds

    - by george_v_reilly
    I've spent some time this evening profiling a Python application on Windows, trying to find out why it was so much slower than on Mac or Linux. The application is an in-house build tool which reads a number of config files, then writes some output files. Using the RunSnakeRun Python profile viewer on Windows, two things immediately leapt out at me: we were running os.stat a lot and file.close was really expensive. A quick test convinced me that we were stat-ing the same files over and over. It was a combination of explicit checks and implicit code, like os.walk calling os.path.isdir. I wrote a little cache that memoizes the results, which brought the cost of the os.stats down from 1.5 seconds to 0.6. Figuring out why closing files was so expensive was harder. I was writing 77 files, totaling just over 1MB, and it was taking 3.5 seconds. It turned out that it wasn't the UTF-8 codec or newline translation. It was simply that closing those files took far longer than it should have. I decided to try a different profiler, hoping to learn more. I downloaded the Windows Performance Toolkit. I recorded a couple of traces of my application running, then I looked at them in the Windows Performance Analyzer, whereupon I saw that in each case, the CPU spike of my app was followed by a CPU spike in MsMpEng.exe. What's MsMpEng.exe? It's Microsoft's antimalware engine, at the heart of Windows Defender. I added my build tree to the list of excluded locations, and my runtime halved. The 3.5 seconds of file closing dropped to 60 milliseconds, a 98% reduction. The moral of this story is: don't let your virus checker run on your builds.
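    The memoizing cache mentioned above can be little more than a dict-backed wrapper. This is a rough sketch of the idea, not the author's actual code; it assumes the build tool only cares about paths whose metadata does not change mid-run:

        import os
        import stat

        _stat_cache = {}

        def cached_stat(path):
            """os.stat with memoization: each path hits the filesystem only once."""
            try:
                return _stat_cache[path]
            except KeyError:
                result = _stat_cache[path] = os.stat(path)
                return result

        def cached_isdir(path):
            """os.path.isdir built on top of the cached stat results."""
            try:
                return stat.S_ISDIR(cached_stat(path).st_mode)
            except OSError:
                return False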

    Read the article

  • Why isn't sox able to convert to mp3?

    - by marue
    I installed SoX and I installed lame-398, but SoX is not able to convert any file to MP3. It fails with the messages: ./../sox FAIL util: Unable to load LAME encoder library (libmp3lame). ./../sox FAIL formats: can't open output file `funktech.mp3': How can I check whether LAME has been installed correctly? How can I get SoX to find the MP3 library? Edit: I did not install SoX at all; it works directly from the command line without installing. LAME was installed by following the instructions on their site (./configure, make, make install), which results in the following files being found in /usr/local/lib/: libmp3lame.dylib, libmp3lame.la, libmp3lame.a. Maybe symlinking libmp3lame.la, which is marked as executable, to /usr/bin would help?
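    The .dylib extension suggests this is on OS X, so two hedged checks are worth trying: confirm that this particular SoX build lists MP3 among its formats at all, and point the dynamic loader at the freshly installed LAME before retrying (input.wav is a placeholder source file):

        # does this sox build list mp3 among its supported formats?
        sox --help | grep -i "file formats"

        # tell the loader where libmp3lame.dylib lives, then retry
        export DYLD_LIBRARY_PATH=/usr/local/lib:$DYLD_LIBRARY_PATH
        sox input.wav funktech.mp3

    Symlinking the .la file would not help in any case; .la files are only libtool metadata, and what the loader needs is the .dylib itself.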

    Read the article

  • Why does my browser take me to Scour.com? (redirect virus)

    - by Paula DiTallo
    The "scour" or Rootkit.Win32.TDSS virus has a long history which can be found here: http://en.wikipedia.org/wiki/Scour Here is the primary symptom: after searching for something in your web browser using google, one of the results that you click on redirects you to scour.com. If you've executed ClamWin, Malwarebytes, McAfee, Norton, etc. to find and isolate the virus without any luck--this isn't really a surprise, since this virus attaches to existing system drivers. I only know of one reliable package that will remove this without ill effects--like adding new spyware. This package is called TDSSKiller. I have seen multiple websites that claim to have this software available, but the one that I know is reliable is located here: http://support.kaspersky.com/viruses/solutions?qid=208280684 Once you go to Kaspersky's tech support site, the TDSSKiller zip file is available for downloading. When you execute this software, you will be able to "cure" or repair the infected driver. Remember to jot down the name of the driver for future reference--should you need to reinstall the driver from a "same-as" working computer, or your install disk if the repair is ineffective. The driver that happened to get infected on my computer was the tcpip.sys driver. This caused my win sockets to loose their ip addresses. In most other instances, less critical drivers such as HDAudBus.sys are infected. In my case, I was not through correcting my computer problems until I corrected the broken WinSock issue and loaded an earlier version of the tcpip.sys driver from: C:\WINDOWS\ServicePackFiles\i386 which I placed in: C:\WINDOWS\system32\drivers Don't forget to reboot your computer after your repair! Once you download TDSSKiller and cure/repair your infected driver(s), the redirect on google searches should disappear .

    Read the article

  • New Enhancements for InnoDB Memcached

    - by Calvin Sun
    In MySQL 5.6, we continued our development on InnoDB Memcached and completed a few widely desired features that make InnoDB Memcached competitive in more scenarios. Notably, they are: 1) support for multiple table mappings, 2) a background thread to auto-commit long-running transactions, and 3) enhancements in binlog performance. Let's go over each of these features one by one, and in the last section we will go over a couple of internally performed performance tests.
    Support for multiple table mappings. In our earlier release, all InnoDB Memcached operations were mapped to a single InnoDB table. In real life, users might want to use the InnoDB Memcached feature on different tables. Being able to access different tables at run time, and to have different mappings for different connections, therefore becomes very desirable, and in this GA release we allow users to do both. We will discuss the key concepts and steps in using this feature.
    1) "mapping name" in the "get" and "set" commands: In order to allow InnoDB Memcached to map to a new table, the user (DBA) is still required to "pre-register" the table(s) in the InnoDB Memcached "containers" table (there is a security consideration behind this requirement). If you would like to know more about the "containers" table, please refer to my earlier blogs on blogs.innodb.com. Once registered, InnoDB Memcached will be able to look up such tables when they are referred to. Each registered table has a unique "registration name" (or mapping_name) corresponding to the "name" field in the "containers" table. To access these tables, users include the "registration name" in their get or set commands, in the form "get @@new_mapping_name.key"; the prefix "@@" is required to signal a mapped-table change. The key and the "mapping name" are separated by a configurable delimiter, which is "." by default. So the syntax is:
        get [@@mapping_name.]key_name
        set [@@mapping_name.]key_name
    or
        get @@mapping_name
        set @@mapping_name
    Here is an example. Let's set up three tables in the "containers" table. The first is a map to the InnoDB table "test/demo_test" with mapping name "setup_1":
        INSERT INTO containers VALUES ("setup_1", "test", "demo_test", "c1", "c2", "c3", "c4", "c5", "PRIMARY");
    Similarly, we set up table mappings for table "test/new_demo" with name "setup_2" and for table "my_database/my_demo" with name "setup_3":
        INSERT INTO containers VALUES ("setup_2", "test", "new_demo", "c1", "c2", "c3", "c4", "c5", "secondary_index_x");
        INSERT INTO containers VALUES ("setup_3", "my_database", "my_demo", "c1", "c2", "c3", "c4", "c5", "idx");
    To switch to table "my_database/my_demo" and get the value corresponding to "key_a", the user issues:
        get @@setup_3.key_a
    (this also outputs the value corresponding to key "key_a"), or simply:
        get @@setup_3
    Once this is done, the connection switches to the "my_database/my_demo" table until another table-mapping switch is requested, so it can continue issuing regular commands like:
        get key_b
        set key_c 0 0 7
    These DMLs will all be directed to the "my_database/my_demo" table. This also implies that different connections can have different bindings (to different tables).
    2) Delimiter: For the delimiter "." that separates the "mapping name" and the key value, we also added a configuration option in the "config_options" system table with the name "table_map_delimiter":
        INSERT INTO config_options VALUES("table_map_delimiter", ".");
    So if users want a different delimiter, they can change it in the config_options table.
    3) Default mapping: Once we have multiple table mappings, there should always be a "default" mapping. We decided that if a mapping named "default" exists, it is chosen as the default mapping; otherwise, the first row of the containers table is chosen as the default. Please note that user tables can be repeated in the "containers" table (for example, if a user wants to access different columns of the same table in different settings), as long as they use different mapping/configuration names in the first column, which is enforced by a unique index.
    4) bind command: In addition, we extended the protocol and added a bind command; its usage is fairly straightforward. To switch to the "setup_3" mapping above, you simply issue:
        bind setup_3
    This switches the connection's InnoDB table to "my_database/my_demo". In summary, with this feature you can now access different tables from different sessions, and even within a single connection you can query different tables.
    Background thread to auto-commit long-running transactions. This feature is related to the "batch" concept we discussed in earlier blogs. The "batch" feature allows us to batch read and write operations and commit them only after a certain number of calls. The batch size is controlled by the configuration parameters "daemon_memcached_w_batch_size" and "daemon_memcached_r_batch_size". This can significantly boost performance. However, it also comes with some disadvantages: for example, you will not be able to view "uncommitted" operations from the SQL end unless you set the transaction isolation level to READ UNCOMMITTED, and in addition it will hold certain row locks for an extended period of time, which might reduce concurrency. To deal with this, we introduced a background thread that auto-commits transactions if they have been idle for a certain amount of time (the default is 5 seconds). The background thread wakes up every second, loops through every connection opened by Memcached, and checks for idle transactions. If such a transaction has been idle longer than the limit and is not being used, it is committed. This limit is configurable by changing "innodb_api_bk_commit_interval"; its default value is 5 seconds, the minimum is 1 second, and the maximum is 1073741824 seconds. With the help of this background thread, you do not need to worry about long-running uncommitted transactions when setting daemon_memcached_w_batch_size and daemon_memcached_r_batch_size to a large number. It also reduces the number of locks that could be held due to long-running transactions, and thus further increases concurrency.
    Enhancement in binlog performance. As you might know, binlog operations are not done by the InnoDB storage engine; rather, they are handled in the MySQL layer. In order to support binlog operations through InnoDB Memcached, we have to artificially create some MySQL constructs in order to access the binlog handler APIs. In the previous lab release, for simplicity, we opened and destroyed these MySQL constructs (such as THD) for each operation. This required us to always set the batch size to 1 when the binlog is on, no matter what "daemon_memcached_w_batch_size" and "daemon_memcached_r_batch_size" are configured to. This put a big restriction on our capability to scale, and there is also quite a bit of overhead in creating and destroying such constructs that bogs performance down. With this release, we made the necessary changes to keep the MySQL constructs for as long as they are valid for a particular connection, so there are no repeated and redundant open and close (table) calls. Now, even with the binlog option enabled (with innodb_api_enable_binlog), we can still batch transactions with daemon_memcached_w_batch_size and daemon_memcached_r_batch_size, and thus scale the write/read performance. There is still overhead that keeps InnoDB Memcached from performing as fast as when the binlog is turned off, but it is much better compared to the previous release, and we are continuing to optimize this area to improve performance as much as possible.
    Performance study: Amerandra of our System QA team has conducted some performance studies on queries through the InnoDB Memcached connection and through the plain SQL end, and they show some interesting results. The tests were conducted on a "Linux 2.6.32-300.7.1.el6uek.x86_64 ix86 (64)" machine with 16 GB of memory, Intel Xeon 2.0 GHz X86_64 CPUs (2 CPUs, 4 cores each), and 2 RAID disks (1027 GB, 733.9 GB). The results are described in the following tables.
        Table 1: Performance comparison on Set operations
        Connections   5.6.7-RC-Memcached-plugin, Set (QPS, memcached-threads=8***)   5.6.7-RC*, Set (QPS)**   X faster
        8             30,000                                                          5,600                    5.36
        32            59,000                                                          13,000                   4.54
        128           68,000                                                          8,000                    8.50
        512           63,000                                                          6,800                    9.23
    * mysql-5.6.7-rc-linux2.6-x86_64
    ** The "set" operation, as implemented for this comparison, involves a couple of DMLs: it first queries the table to see whether the "key" exists; if it does not, the new key/value pair is inserted, and if it does, the "value" field of the matching row (by key) is updated. So when used in the above query, it is a precompiled stored procedure, and the query just executes that procedure.
    *** added "--daemon_memcached_option=-t8" (the default is 4 threads)
    So we can see that with this "set" query, InnoDB Memcached can run 4.5 to 9 times faster than the MySQL server.
        Table 2: Performance comparison on Get operations
        Connections   5.6.7-RC-Memcached-plugin, Get (QPS, memcached-threads=8)   5.6.7-RC*, Get (QPS)   X faster
        8             42,000                                                       27,000                 1.56
        32            101,000                                                      55,000                 1.83
        128           117,000                                                      52,000                 2.25
        512           109,000                                                      52,000                 2.10
    With the "get" query (i.e. the select query), Memcached performs 1.5 to 2 times faster than normal SQL.
    Summary: In this release we added several much-desired features to InnoDB Memcached, allowing users to operate on different tables through the Memcached interface. We now also provide a background commit thread for long-running idle transactions, which lets users configure large write/read batches without worrying about the number of rows locked or about not being able to see (uncommitted) data. We also greatly enhanced performance when the binlog is enabled. We will continue making efforts in both performance and functionality to make InnoDB Memcached a good demo case for our InnoDB APIs. Jimmy Yang, September 29, 2012
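    For anyone following along with the mapping examples above, the commands travel over the normal memcached text protocol; a hedged sketch of a session against the default port 11211, with server responses left out or abbreviated:

        $ telnet 127.0.0.1 11211
        get @@setup_3.key_a
        ...value stored under key_a in my_database/my_demo comes back here...
        set key_c 0 0 7
        new_val
        STORED
        bind setup_2

    The 7 in the set line is the length of "new_val" in bytes, per the standard memcached protocol, and the final bind switches the connection's mapping without fetching anything.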

    Read the article

  • Configure Dual Boot, Windows 7 and Ubuntu 12.04 with or without EFI

    - by Keroak
    I have just installed Ubuntu 12.04 on a laptop with Windows 7, but I can't get it to boot into Ubuntu. During the installation I made these partitions (maybe too many):
    /dev/sda1 FAT32 SYSTEM, 200 MB, boot flag (EFI boot, I guess)
    /dev/sda2 unknown file system, 128 MB, msftres (Windows Boot Manager)
    /dev/sda3 NTFS OS, 100 GB (Windows 7)
    /dev/sda4 NTFS DATOS, 315 GB (data partition)
    /dev/sda5 ext4, 28 GB (/home)
    /dev/sda8 unknown file system, 1 GB, bios_grub (I'm not very sure why I made this one)
    /dev/sda6 ext4, 17 GB (/, Ubuntu 12.04 apparently installed without errors)
    /dev/sda7 linux-swap, 2 GB (swap)
    I can boot into Windows perfectly. I tried to configure Windows Boot Manager with EasyBCD, but it doesn't recognize any boot entry. Anyway, I added an Ubuntu entry and it configured it automatically. Now I have two boot entries: the Windows 7 one, which works, and the Ubuntu 12.04 one, which prompts a "No application found" message. I restarted from a USB with Ubuntu and tried to fix GRUB from the command line and with boot-repair. No results. As far as I understand, I have to tell Windows Boot Manager where my Ubuntu boot loader is. So I have two problems: I don't know where my Ubuntu boot loader (GRUB or GRUB2 or whatever) is, and I don't know how to set my Ubuntu entry in Windows Boot Manager. I guess I should use BCDedit.exe, since EasyBCD didn't show me the entries, but I don't know what parameters to use. I read several articles about it but didn't find anything useful.
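    On the first question (where the Ubuntu boot loader actually lives), a few read-only checks from the Ubuntu live USB can settle it. This is a hedged sketch that assumes / really is on /dev/sda6 and the FAT32 partition is the EFI system partition, as in the listing above:

        sudo parted /dev/sda print          # shows whether the disk is GPT and which partition carries the boot/esp flag
        sudo mount /dev/sda6 /mnt
        ls /mnt/boot/grub                    # GRUB 2's files and grub.cfg live here if it installed in BIOS mode
        sudo mount /dev/sda1 /mnt/boot/efi
        ls /mnt/boot/efi/EFI/ubuntu          # grubx64.efi shows up here instead if it installed in UEFI mode

    Whether the files turn up under boot/grub or under EFI/ubuntu is also what decides how, or whether, a Windows Boot Manager entry can chainload GRUB at all.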

    Read the article

  • PC On/Off Time Charts Windows Uptime; No Logging Necessary

    - by Jason Fitzpatrick
    Windows: PC On/Off Time is a graphical tool that displays your PC's uptime, downtime, errors, and more, all in a clear and portable package. One of the hassles of using logging tools is that you usually have to enable the logging and then wait for results to pile up before seeing anything useful (such as when you turn on the logging on your router). PC On/Off Time taps right into the event logs your Windows PC is already keeping, so you get immediate access to your uptime history. If you look at the screenshot above you can see an accurate picture of the last few weeks of uptime on my computer. On October 23-24 I didn't shut down my PC; the rest of the time I hibernated it overnight when I wasn't using it. On November 1st I installed an SSD (you can see the burst of reboots and short uptimes), and on November 9th there was a brief power outage that caused an unexpected stop (the red arrows on the timeline for the 9th). The free version offers a three-week peek back into your uptime history (upgrade to the Pro version for $12.75, or for free using Trial Pay, to unlock your complete uptime history). PC On/Off Time is Windows only. PC On/Off Time [via Addictive Tips]

    Read the article

  • How to create a "retro" pixel shader for transformed 2D sprites that maintains pixel fidelity?

    - by David Gouveia
    The image below shows two sprites rendered with point sampling on top of a background: The left skull has no rotation/scaling applied to it, so every pixel matches perfectly with the background. The right skull is rotated/scaled, and this results in larger pixels that are no longer axis aligned. How could I develop a pixel shader that would render the transformed sprite on the right with axis aligned pixels of the same size as the rest of the scene? This might be related to how sprite scaling was implemented in old games such as Monkey Island, because that's the effect I'm trying to achieve, but with rotation added. Edit As per kaoD's suggestions, I tried to address the problem as a post-process. The easiest approach was to render to a separate render target first (downsampled to match the desired pixel size) and then upscale it when rendering a second time. It did address my requirements above. First I tried doing it Linear -> Point and the result was this: There's no distortion but the result looks blurred and it loses most of the highlights colors. In my opinion it breaks the retro look I needed. The second time I tried Point -> Point and the result was this: Despite the distortion, I think that might be good enough for my needs, although it does look better as a still image than in motion. To demonstrate, here's a video of the effect, although YouTube filtered the pixels out of it: http://youtu.be/hqokk58KFmI However, I'll leave the question open for a few more days in case someone comes up with a better sampling solution that maintains the crisp look while decreasing the amount of distortion when moving.

    Read the article

  • Bash History not containing all history and blank after reboot, how to resolve?

    - by TryTryAgain
    I've recently upgraded from 13.04 to 13.10 and realized my terminal bash history is not surviving reboots. cat ~/.bash_history gave me a permission denied error. I, possibly unnecessarily or wrongly, issued chmod 777 ~/.bash_history to see if that would help, and although I could then cat and read some contents, it contained not much of anything as far as history goes. I also tried sudo rm ~/.bash_history after reading "bash history not being preserved". Strangely, after doing that, I typed a few test commands, ls and ls -lah, and upon pressing the up arrow to go back through history it contained those two commands as well as odd history from some far-off time in the past, but very few results and not the hundreds of commands I typed earlier in the day. Is there a new place bash history is stored? How can removing ~/.bash_history not get rid of the commands that are somehow lingering? I am not certain, but I believe my root bash history is acting normally; my user bash history is what's causing me trouble. Any help and guidance in tracking down and solving this problem is appreciated.
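    A few quick checks narrow this down, since bash only writes the file named in HISTFILE on a clean shell exit and silently gives up if it cannot write it (a ~/.bash_history that ended up owned by root, for example, would explain the permission errors):

        echo "$HISTFILE" "$HISTSIZE" "$HISTFILESIZE"   # where history is written and how much is kept
        ls -l ~/.bash_history                          # owner should be your user, not root
        lsattr ~/.bash_history                         # an 'i' (immutable) attribute would block writes
        shopt histappend                               # off means each exiting shell overwrites the file
        sudo chown $USER: ~/.bash_history              # reclaim the file if root owns it

    The lingering old entries are expected behaviour: each running shell keeps its own history in memory and only merges it with the file when it exits, so deleting the file does not touch what open terminals already hold.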

    Read the article

  • Optimizing lifestyle and training

    - by Gabe
    I am a college freshman who has recently discovered a passion for computer science. Having had my first lick of formal python training last semester, I have cast aside my previously hedonist way of life and tunneled my sights on becoming the most rounded and proficient programmer I can be. I know that I'm taking strides in the right direction (I've stopped smoking, I've been exercising every day, I've taught myself C++ and OpenGL, and I've begun training in kung-fu and meditation), yet I am still finding myself struggling to achieve satisfactory results. I would like to be able to spend a good 3-4 hours every day burning through textbooks. I have the time cleared and the resources allocated. The problem lies in the logistics-- I have never taken anything seriously before. Recently I've realized that I am clueless when it comes to taking care of myself and gaining control of my mind, and it drastically hinders my productivity. My question is this: How can I learn to manage my time and take care of myself such that I can spend the maximum amount of time every day studying with steady concentration? Personal tricks would be key here: techniques you use to get yourself to sleep, a diet that yields focus, even computer break stretching routines or active reading techniques. Anything you could think of here would be great. I was a low-life in high school and I have the drive to turn my life around, I'm just quite a bit behind in the way of good habits :)

    Read the article

  • Taking web sites offline for demonstration

    While working in software development in general, and in web development for a couple of customers, it is quite common to have to provide a test bed where the client can get an image, or better said a feeling, for the visions and ideas you are talking about. Usually here at IOS Indian Ocean Software Ltd. we set up a demo web site on one of our staging servers and provide credentials to the customer to access and review our progress and work ad hoc. This gives us the highest flexibility on both sides, as the test bed is simply online and available 24/7. We can update the structure, the UI and the data at any time, and the client is able to view it whenever it suits her/him best.
    Limited or no online connectivity. But what happens when your client is not able to be online, no matter the reason? Here are some of the more obvious ones:
    - No internet connection (permanently or temporarily)
    - Expensive connection, e.g. mobile data package, stay at a hotel, etc.
    - Presentation devices at an exhibition, e.g. tablets or iPads
    - Being abroad for a certain time, and only occasionally online
    - No network coverage, especially on mobile
    - Bad infrastructure, e.g. in Third World countries
    - Providing a catalogue on CD or USB pen drive
    Anyway, it doesn't really matter. We should be able to provide a solution for the circumstances of our customers.
    Presentation during an exhibition. Recently, we had the following request from a customer: "Is it possible to let us have a desktop version of ResortWork.co.uk that we can use for demo purposes at the forthcoming Ski Shows? It would allow us to let stand visitors browse the sites on an iPad to view jobs and training directory course listings." Yes, sure we can do that. You might think: why don't they simply use 3G-enabled iPads for that purpose? As stated above, there might be several reasons for that - low coverage, expensive data packages, etc. Anyway, it is not a question of how to circumvent the request but of delivering a solution.
    Possible solutions... or not? We have done offline websites before, and have even established complete mirrors of one or two web sites on our systems. There are actually several possibilities for handling this kind of request, and it mainly depends on the system or device the offline site should be available on. Here, it is clearly expressed that we have to address this on an Apple iPad; well actually, I think they'd like to use multiple devices during their exhibitions. Following is an overview of possible solutions depending on the technology or device in use, and how each can be done:
    - Replication of source files and database: The above-mentioned web site runs on ASP.NET, IIS and SQL Server. In case a laptop or slate runs a Windows OS, the easiest way would be to take a snapshot of the source files and database and transfer them as a local installation to those Windows machines. This approach would be fully operational on the local machine.
    - Saving pages for offline usage: This is actually a quite tedious job, but still practicable for small web sites.
    - Tool-based approach to 'harvest' the web site: There are quite some tools in the wild that could handle this job, namely wget, HTTrack, web copier, etc.
    - Screenshots bundled as a PDF document: Not really... ;-)
    - Creating a screencast or video: Simply navigate through your website and record your desktop session.
Actually, we are using this kind of approach to track down difficult problems in order to see and understand exactly what the user was doing to cause an error. Of course, this list isn't complete and I'd love to get more of your ideas in the comments section below the article. Preparations for offline browsing The original website is dynamically and data-driven by ASP.NET, and looks like this: As we have to put the result onto iPads we are going to choose the tool-based approach to 'download' the whole web site for offline usage. Again, depending on the complexity of your web site you might have to check which of the applications produces the best results for you. My usual choice is to use wget but in this case, we run into problems related to the rewriting of hyperlinks. As a consequence of that we opted for using HTTrack. HTTrack comes in different flavours, like console application but also as either GUI (WinHTTrack on Windows) or Web client (WebHTTrack on Linux/Unix/BSD). Here's a brief description taken from the original website about HTTrack: HTTrack is a free (GPL, libre/free software) and easy-to-use offline browser utility. It allows you to download a World Wide Web site from the Internet to a local directory, building recursively all directories, getting HTML, images, and other files from the server to your computer. HTTrack arranges the original site's relative link-structure. Simply open a page of the "mirrored" website in your browser, and you can browse the site from link to link, as if you were viewing it online. And there is an extensive documentation for all options and switches online. General recommendation is to go through the HTTrack Users Guide By Fred Cohen. It covers all the initial steps you need to get up and running. Be aware that it will take quite some time to get all the necessary resources down to your machine. Actually, for our customer we run the tool directly on their web server to avoid unnecessary traffic and bandwidth. After a couple of runs and some additional fine-tuning - explicit inclusion or exclusion of various external linked web sites - we finally had a more or less complete offline version available. A very handsome feature of HTTrack is the error/warning log after completing the download. It contains some detailed information about errors that appeared on the pages and the links within the pages that have been processed. Error: "Bad Request" (400) at link www.resortwork.co.uk/job-details_Ski_hire:tech_or_mgr_or_driver_37854.aspx (from www.resortwork.co.uk/Jobs_A_to_Z.aspx)Error: "Not Found" (404) at link www.247recruit.net/images/applynow.png (from www.247recruit.net/css/global.css)Error: "Not Found" (404) at link www.247recruit.net/activate.html (from www.247recruit.net/247recruit_tefl_jobs_network.html) In our situation, we took the records of HTTP 400/404 errors and passed them to the web development department. Improvements are to be expected soon. ;-) Quality assurance on the full-featured desktop Unfortunately, the generated output of HTTrack was still incomplete but luckily there were only images missing. Being directly on the web server we simply copied the missing images from the original source folder into our offline version. After that, we created an archive and transferred the file securely to our local workspace for further review and checks. 
    From that point on, it wasn't necessary to get any more files from the original web server, and we could focus completely on the process of browsing and navigating through the offline version to isolate visual differences and functional problems. As said, the original web site runs on ASP.NET Web Forms and uses postback calls for interaction like search, pagination and, partly, navigation. This is the main area for improving the offline experience. Of course, as for standard web development, it is advisable to test with various browsers, and strangely we discovered that the offline version looked pretty good in Firefox, Chrome and Safari, but not in Internet Explorer. A quick look at the HTML source shed some light on this: there are conditional CSS inclusions based on the user agent. HTTrack does not act as Internet Explorer, and so we didn't have the necessary overrides for this browser. Not problematic after all in our case, but you might have to pay attention to this and get the IE-specific files explicitly. And while having a look at the source code, we also found out that HTTrack actually modifies the generated HTML output. On several occasions we discovered that <div> elements were converted into <table> constructs for no obvious reason, even nested structures.
    Search 'e'nd destroy - sed (or Notepad++) to the rescue. During our intensive root-cause analysis of a couple of HTML/CSS problems that needed some extra attention, it is very helpful to be familiar with an editor that allows search and replace over multiple files, like sed (the stream editor for filtering and transforming text on Linux) or my personal favourite, Notepad++ on Windows. This allowed us to quickly fix a lot of anchors with onclick attributes and JavaScript code that was addressed to ASP.NET files instead of their generated HTML counterparts, like so:
        grep -lr -e '\.aspx' * | xargs sed -i -e 's/\.aspx/.html?/g'
    The additional question mark after the HTML extension helps to separate the query string from the actual target, and this solved all our missing hyperlinks very fast. The same can be done in Notepad++ on Windows, too; just use the 'Replace in files' feature and you are settled, especially in combination with regular expressions (regex).
    Landscape of browsers. Okay, after several runs of HTML/CSS code analysis, and after searching and replacing some strings in a pool of more than 4,000 files, we finally had a very good offline browsing experience in Firefox and Chrome on Linux. Next, we transferred that modified set of files to a Windows 8 machine for review in Firefox, Chrome and Internet Explorer 7 to 10, and to a Mac mini running Mac OS X 10.7 to check the output in Safari and again in Chrome. Besides IE, for the reasons already mentioned above, the results were identical. And last but not least, it was time to check the web site on tablets. Please continue reading in the following articles: Taking web sites offline for demonstration on Galaxy Tablet; Taking web sites offline for demonstration on iPad
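    For reference, the harvesting step described earlier can also be driven from HTTrack's command line; the output directory and the filter below are illustrative guesses rather than the exact options used for this project:

        httrack "http://www.resortwork.co.uk/" -O ./resortwork-offline "+*.resortwork.co.uk/*" -v

    Running it in the GUI (WinHTTrack/WebHTTrack) amounts to the same thing, with the filters entered under the scan rules.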

    Read the article

  • Trouble installing Server 12.10 - Dead keyboard, Blank Screen, Network Config

    - by Mikey
    Installing 12.10 server from cd - minimal installation: basic system, ssh server,postgreSQL, manual updates. Hardware is brand new HP server that also runs Win 2003 Server Standard as a DNC excellently - I installed the grub boot manager on the primary partition and it is working fine - can boot to Win or Ubuntu without issue. Everything seemed to go OK on the installation - BUT when I restarted the system after install and booted to Ubuntu, I got the command prompt for Ubuntu, but the keyboard was UNREPSONSIVE - dead. There is nothing wrong with the keyboard - works fine if I boot to Win. With a completely unrepsonsive keyboard I had to hit the power switch - when I restarted and booted to Ubuntu, Ubuntu started but no command prompt came up at all - just black screen. I powered down and rebooted to advanced Ubuntu options - it tried to reinstall/initliaze a long list of packages - when it got to 'waiting for network configuration' it waited, then a message 'waiting 60 seconds for network configuration'... it waited 60 seconds and then I got a 'failed to configure network message' and it continued. Finally it finished, I hit enter and got to a prompt - but again, keyboard UNREPSONSIVE - dead. I went through this several times - tried 'repairing broken installation' option and also reinstalling entirely - always same results. I am flummoxed. The only clue I have is that for the Windows DNC config, the IP address is static - not via DHCP. But I don't think that should impact Ubuntu at all - perhaps I am mistaken. What is wrong?

    Read the article
