Search Results

Search found 3639 results on 146 pages for 'amd processor'.


  • AudioQueue recording as float

    - by niklassaers
    Hi guys, I would like to have the result of my recording as a float in the range [0.0, 1.0], or alternatively [-1.0, 1.0], because of a bit of math I want to do on it. When I set my recording format to float, like this:

        mRecordFormat.mFormatFlags = kLinearPCMFormatFlagIsFloat;

    I get:

        Error: AudioQueueNewInput failed ('fmt?')

    Does this mean the hardware doesn't support recording to floats? If not, how do I set it to record in floats? If so, are there any processor-friendly ways I can convert a signed integer array to a float array? Cheers, Nik
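
    If float input turns out to be unsupported, converting the recorded buffer afterwards is cheap. A minimal sketch in plain C++ (assuming 16-bit signed samples; the function and buffer names are only illustrative):

        #include <cstdint>
        #include <cstddef>

        // Convert signed 16-bit PCM samples to floats in [-1.0, 1.0].
        // Dividing by 32768 maps the most negative sample (-32768) exactly to -1.0.
        void int16_to_float(const int16_t *in, float *out, std::size_t count)
        {
            const float scale = 1.0f / 32768.0f;
            for (std::size_t i = 0; i < count; ++i)
                out[i] = in[i] * scale;
        }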

    Read the article

  • What useful GDB scripts have you used/written?

    - by nik
    People use gdb on and off for debugging; of course, there are lots of other debugging tools across the various OSes, with and without GUIs, and maybe other fancy IDE features. I would like to know what useful gdb scripts you have written and liked. I don't primarily mean a dump of commands in a something.gdb file that you source to pull out a bunch of data, but if that made your day, go ahead and talk about it. Let's think conditional processing, control loops, and functions written for more elegant and refined debugging, and maybe even for whitebox testing. Things get interesting when you start debugging remote systems (say, over a serial/ethernet interface), and what if the target is a multi-processor (and multithreaded) system? Let me give a simple case as an example: a script that traversed serially over the entries of a large hash table implemented on an embedded platform to locate a bad entry. That helped me debug a broken hash table once.

    Read the article

  • Accessing a Service from within an XNA Content Pipeline Extension

    - by David Wallace
    I need to allow my content pipeline extension to use a pattern similar to a factory. I start with a dictionary type:

        public delegate T Mapper<T>(MapFactory<T> mf, XElement d);

        public class MapFactory<T>
        {
            Dictionary<string, Mapper<T>> map = new Dictionary<string, Mapper<T>>();

            public void Add(string s, Mapper<T> m) { map.Add(s, m); }

            public T Get(XElement xe)
            {
                if (xe == null)
                    throw new ArgumentNullException("Invalid document");
                var key = xe.Name.ToString();
                if (!map.ContainsKey(key))
                    throw new ArgumentException(key + " is not a valid key.");
                return map[key](this, xe);
            }

            public IEnumerable<T> GetAll(XElement xe)
            {
                if (xe == null)
                    throw new ArgumentNullException("Invalid document");
                foreach (var e in xe.Elements())
                {
                    var val = e.Name.ToString();
                    if (map.ContainsKey(val))
                        yield return map[val](this, e);
                }
            }
        }

    Here is one type of object I want to store:

        public partial class TestContent
        {
            // Test type
            public string title;
            // Once test if true
            public bool once;
            // Parameters
            public Dictionary<string, object> args;

            public TestContent()
            {
                title = string.Empty;
                args = new Dictionary<string, object>();
            }

            public TestContent(XElement xe)
            {
                title = xe.Name.ToString();
                args = new Dictionary<string, object>();
                xe.ParseAttribute("once", once);
            }
        }

    XElement.ParseAttribute is an extension method that works as one might expect; it returns a boolean that is true if successful. The issue is that I have many different types of tests, each of which populates the object in a way unique to the specific test. The element name is the key to MapFactory's dictionary. This type of test, while atypical, illustrates my problem:

        public class LogicTest : TestBase
        {
            string opkey;
            List<TestBase> items;

            public override bool Test(BehaviorArgs args)
            {
                if (items == null) return false;
                if (items.Count == 0) return false;
                bool result = items[0].Test(args);
                for (int i = 1; i < items.Count; i++)
                {
                    bool other = items[i].Test(args);
                    switch (opkey)
                    {
                        case "And": result &= other; if (!result) return false; break;
                        case "Or": result |= other; if (result) return true; break;
                        case "Xor": result ^= other; break;
                        case "Nand": result = !(result & other); break;
                        case "Nor": result = !(result | other); break;
                        default: result = false; break;
                    }
                }
                return result;
            }

            public static TestContent Build(MapFactory<TestContent> mf, XElement xe)
            {
                var result = new TestContent(xe);
                string key = "Or";
                xe.GetAttribute("op", key);
                result.args.Add("key", key);
                var names = mf.GetAll(xe).ToList();
                if (names.Count() < 2)
                    throw new ArgumentException("LogicTest requires at least two entries.");
                result.args.Add("items", names);
                return result;
            }
        }

    My actual code is more involved, as the factory has two dictionaries: one that turns an XElement into a content type to write, and another used by the reader to create the actual game objects. I need to build these factories in code because they map strings to delegates. I have a service that contains several of these factories. The mission is to make these factory classes available to a content processor. Neither the processor itself nor the context it uses as a parameter have any known hooks to attach an IServiceProvider or equivalent. Any ideas?

    Read the article

  • Cannot set up dual monitors correctly in Fedora 15 with KDE

    - by adivasile
    I have 2 monitors: a 24" LCD connected via DVI (primary) and a 19" LCD connected via VGA (secondary). Every time Fedora starts, the second display is set to clone the first, they both run at 1280x1024, and I always have to disable the 19" monitor in order for the bigger one to run at 1920x1080. I want to set them up so that my secondary monitor extends the primary one. The problem is that no matter what kind of configuration I choose, it has no effect; my secondary monitor remains disabled. I've tried using both the Display manager from KDE and the ATI Control Panel, and the behaviour is always the same: the moment I click apply, the screen flickers and nothing changes. I've successfully used the extended setup in Fedora 15 with GNOME 3. I have a Radeon HD 4300 series video card and I'm using the drivers downloaded from the AMD site. This is the output of xrandr -q:

        Screen 0: minimum 320 x 200, current 1920 x 1080, maximum 1920 x 1920
        VGA-0 connected (normal left inverted right x axis y axis)
           1280x1024      75.0  60.0
           1280x960       60.0
           1152x864       75.0
           1024x768       75.0  70.1  66.0  60.0
           832x624        74.6
           800x600        72.2  75.0  60.3  56.2
           640x480        75.0  72.8  66.7  59.9
           720x400        70.1
        DVI-0 connected 1920x1080+0+0 (normal left inverted right x axis y axis) 477mm x 268mm
           1920x1080      60.0*+  60.0
           1680x1050      59.9
           1600x900       60.0
           1280x1024      75.0  60.0
           1280x960       60.0
           1152x864       75.0
           1280x720       60.0
           1152x720       60.0
           1024x768       75.0  60.0
           832x624        74.6
           800x600        75.0  60.3
           640x480        75.0  59.9
           720x400        70.1

    Later edit: The problem seems to come from the ATI drivers. I managed to set up the monitors like I wanted after I uninstalled the drivers. Unfortunately I'm working on an OpenCL project, so I had to reinstall them. The moment I did that, all my previous settings were forgotten and I was back to square one.

    Read the article

  • Idle state detection for server

    - by odinmillion
    Windows has a service that detects the idle state. Details:

        Task Idle Conditions
        The computer is considered idle if all the processors and all the disks were idle for more than 90% of the past 15 minutes and if there is no keyboard or mouse input during this period of time. When the Task Scheduler service detects that the computer is idle, the service only waits for user input to mark the end of the idle state.

    This is very useful for ordinary PCs that have a keyboard and mouse: we can use the standard Task Scheduler to start some process, like defrag, when the PC is in the idle state and stop it when the PC leaves the idle state. But what should we use on a standalone server without keyboard and mouse? The server sometimes receives commands over TCP/IP and starts CPU and HDD activity, but at other times CPU and HDD activity is at zero. I would like to use these periods of time to run defrag or another process, but such "idle-state" processes should be terminated when new commands arrive. So the standard idle-state conditions can't help me, because there is no user input to mark the end of the idle state. I need a more customizable idle-state detector: automatically started processes shouldn't influence the idle state, but the PC should leave the idle state when another process appears. What should I use? Maybe some advanced task scheduler already exists? Or should I write a small utility in C#? I hope this is a standard task and all the useful utilities are already compiled. :)

    Read the article

  • cups log kills ubuntu 12.04 and sudoer permissions changed

    - by peterretief
    I am using Ubuntu 12.04 as a desktop and recently had a weird crash: the log file for CUPS filled up the entire drive and wouldn't let me back in. What also changed was that /var/lib/sudo had changed from root to peter (me). I didn't make this change - I checked the history! I set the sudoers back to root and capped the max size for the CUPS log. Has anyone had a similar experience? It feels like someone is messing around with my settings. Is there any way to trace how the error occurred?

    Logs:

    auth.log

        Jan 1 02:04:13 peter-desktop lightdm: pam_unix(lightdm:session): session opened for user lightdm by (uid=0)
        Jan 1 02:04:13 peter-desktop lightdm: pam_ck_connector(lightdm:session): nox11 mode, ignoring PAM_TTY :0
        Jan 1 02:06:53 peter-desktop lightdm: pam_unix(lightdm:session): session opened for user lightdm by (uid=0)
        Jan 1 02:06:53 peter-desktop lightdm: pam_ck_connector(lightdm:session): nox11 mode, ignoring PAM_TTY :0

    syslog

        Jan 1 02:04:13 peter-desktop rsyslogd: [origin software="rsyslogd" swVersion="5.8.6" x-pid="903" x-info="http://www.rsyslog.com"] start
        Jan 1 02:04:13 peter-desktop rsyslogd: rsyslogd's groupid changed to 103
        Jan 1 02:04:13 peter-desktop rsyslogd: rsyslogd's userid changed to 101
        Jan 1 02:04:13 peter-desktop rsyslogd-2039: Could not open output pipe '/dev/xconsole' [try http://www.rsyslog.com/e/2039 ]
        Jan 1 02:04:13 peter-desktop bluetoothd[898]: Failed to init gatt_example plugin
        Jan 1 02:04:13 peter-desktop kernel: [ 0.000000] Initializing cgroup subsys cpuset
        Jan 1 02:04:13 peter-desktop kernel: [ 0.000000] Initializing cgroup subsys cpu
        Jan 1 02:04:13 peter-desktop kernel: [ 0.000000] Linux version 3.2.0-25-generic-pae (buildd@palmer) (gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) ) #40-Ubuntu SMP Wed May 23 22:11:24 UTC 2012 (Ubuntu 3.2.0-25.40-generic-pae 3.2.18)
        Jan 1 02:04:13 peter-desktop kernel: [ 0.000000] KERNEL supported cpus:
        Jan 1 02:04:13 peter-desktop kernel: [ 0.000000]   Intel GenuineIntel
        Jan 1 02:04:13 peter-desktop kernel: [ 0.000000]   AMD AuthenticAMD
        Jan 1 02:04:13 peter-desktop kernel: [ 0.000000]   NSC Geode by NSC

    Read the article

  • Segmentation fault on login to mysql

    - by numberwhun
    Hello everyone! I recently did a fresh install of Ubuntu on my laptop (HP dv7, AMD Dual Core with 4 gigs RAM). I am working on installing my development environment and tools, and one of the first things I was working on is getting MySQL installed. The following was my configure statement with options:

        ./configure --prefix=/usr/local/mysql --with-big-tables --with-unix-socket-path=/usr/local/mysql/tmp/mysql.sock --with-named-curses-libs=/lib/libncurses.so.5.7

    After I did the make; make install, I did the post configuration such as setting the root password and installing the mysqld daemon in its rightful place. My issue is when I try to log in to mysql to start using it; the following shows what happens:

        $ mysql -u root -p
        Enter password:
        Welcome to the MySQL monitor.  Commands end with ; or \g.
        Your MySQL connection id is 1
        Server version: 5.1.42 Source distribution
        Segmentation fault

    I have searched Google extensively, I have searched through the MySQL bugs database, and I have yet to find anything that matches my issue. Here is the contents of my my.cnf file, in case you want to see it:

        $ cat /etc/my.cnf
        [mysqld]
        basedir=/usr/local/mysql
        datadir=/usr/local/mysql
        socket=/usr/local/mysql/tmp/mysql.sock
        [mysql.server]
        user=mysql
        #basedir=/var/lib
        [client]
        socket=/usr/local/mysql/tmp/mysql.sock
        [mysqld_safe]
        err-log=/usr/local/mysql/logs/mysqld.log
        pid-file=/var/run/mysqld/mysqld.pid

    I am really hoping that someone here can tell me what has gone wrong with my installation, as I would really love to know. I welcome and look forward to all responses. Thank you in advance! Best regards, Jeff

    Read the article

  • fast algorithm to sort very small set

    - by aaa
    Hello. This is a problem I ran into a long time ago, and I thought I'd ask for your ideas. Assume I have a very small set of numbers (integers), 4 or 8 elements, that need to be sorted, fast. What would be the best approach/algorithm? My approach was to use the max/min functions only. At this point it becomes somewhat hardware dependent, so let us assume an Intel 64-bit processor with SSE3. Thanks
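
    For reference, a fixed-size input like this is the textbook case for a sorting network built from compare-exchange (min/max) steps; the same pattern vectorizes with SIMD min/max instructions where they are available. A small sketch in C++ (an illustration of the idea, not the poster's code), using the optimal 5-comparator network for 4 elements:

        #include <algorithm>
        #include <cstdio>

        // Compare-exchange: after the call, a <= b.
        static inline void cswap(int &a, int &b)
        {
            int lo = std::min(a, b);
            int hi = std::max(a, b);
            a = lo;
            b = hi;
        }

        // Optimal sorting network for 4 elements (5 compare-exchanges).
        void sort4(int v[4])
        {
            cswap(v[0], v[1]);
            cswap(v[2], v[3]);
            cswap(v[0], v[2]);
            cswap(v[1], v[3]);
            cswap(v[1], v[2]);
        }

        int main()
        {
            int v[4] = {42, 7, 19, 3};
            sort4(v);
            std::printf("%d %d %d %d\n", v[0], v[1], v[2], v[3]);  // prints: 3 7 19 42
        }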

    Read the article

  • Cast to generic type in C#

    - by Andrej
    I have a Dictionary to map a certain type to a certain generic object for that type. For example, typeof(LoginMessage) maps to MessageProcessor<LoginMessage>. Now the problem is to retrieve this generic object at runtime from the Dictionary, or to be more specific: to cast the retrieved object to the specific generic type. I need it to work something like this:

        Type key = message.GetType();
        MessageProcessor<key> processor = messageProcessors[key] as MessageProcessor<key>;

    Hope there is an easy solution to this. Edit: I do not want to use ifs and switches. Due to performance issues I cannot use reflection of some sort either.

    Read the article

  • CPU consumption of my process

    - by Abruzzo Forte e Gentile
    Hi all, I would like to use Performance Monitor to check the CPU consumption of my process. Right now I am working on a multi-core machine. If I look at my process in Task Manager, I see that it consumes 20% of the CPU. If I start Performance Monitor and select Process -> % Processor Time, I see values peaking at and over 100%. Do you know why, and how to get the real measure? I also looked at the CPU consumption for all of my 4 cores, but I don't know exactly how to attribute that consumption to my process. If you can suggest a link or URL about how to read CPU usage, I would really appreciate it! Thanks a lot! AFG
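
    For context, the Process\% Processor Time counter is expressed relative to a single core, so on a 4-core box a busy process can legitimately read up to 400%; dividing the value by the number of logical processors gives the Task-Manager-style percentage. A rough sketch of doing that query programmatically with the PDH API (the instance name "MyProcess" is a placeholder for your process):

        #include <windows.h>
        #include <pdh.h>
        #include <cstdio>
        #pragma comment(lib, "pdh.lib")

        int main()
        {
            PDH_HQUERY query;
            PDH_HCOUNTER counter;
            PdhOpenQuery(NULL, 0, &query);
            // "MyProcess" is a placeholder for the process instance name.
            PdhAddEnglishCounterA(query, "\\Process(MyProcess)\\% Processor Time", 0, &counter);

            PdhCollectQueryData(query);          // baseline sample
            Sleep(1000);
            PdhCollectQueryData(query);          // second sample, one second later

            PDH_FMT_COUNTERVALUE value;
            PdhGetFormattedCounterValue(counter, PDH_FMT_DOUBLE, NULL, &value);

            SYSTEM_INFO si;
            GetSystemInfo(&si);                  // number of logical processors
            std::printf("raw: %.1f%%  normalized: %.1f%%\n",
                        value.doubleValue,
                        value.doubleValue / si.dwNumberOfProcessors);

            PdhCloseQuery(query);
            return 0;
        }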

    Read the article

  • Wireless USB keyboard and mouse can wake system, but then receiver is inactive

    - by BlueMonkMN
    I have a Microsoft-brand USB device that acts as a receiver for a wireless Microsoft keyboard and a wireless mouse. When it's operating normally, there are LEDs on the device indicating Caps Lock, Num Lock and Function Lock, of which the latter two are usually lit. It is plugged into a Dell Inspiron 531 with Windows 7 32-bit running on an AMD Athlon 64 X2 Dual Core processor 5000+. When the computer goes to sleep (the power indicator on the main box is flashing), I can wake it by moving the mouse. So far all is good.

    However, something changed in, I think, the past couple of weeks (I suspect due to a Microsoft driver update problem). Before the change, after waking the computer everything would operate normally as far as I could tell, but now after waking the computer the receiver has no lights on, and the keyboard and mouse are completely unresponsive (which is odd, considering the mouse woke up the computer). There is a button on the receiver that's supposed to reset the wireless connection and flash the lights while it does so, but it has no effect in this state. It's like the receiver doesn't have power (but how would the system know I moved the mouse, unless the power was on until it woke up?).

    I have checked the BIOS/CMOS settings (or whatever you call them) and did not see anything related to USB in the power management section. I have checked Windows 7 Device Manager and ensured that all the USB Root Hub devices have the setting unchecked for allowing USB power to be turned off. Like I said, this was working before, and the only thing I can think of that's changed is applying Windows Updates.

    Read the article

  • Is Updating double operation atomic

    - by Yan Cheng CHEOK
    In Java, updating double and long variables may not be atomic, as a double/long is treated as two separate 32-bit variables. http://java.sun.com/docs/books/jls/second_edition/html/memory.doc.html#28733 In C++, if I am using a 32-bit Intel processor and the Microsoft Visual C++ compiler, is updating a double (8 bytes) atomic? I cannot find much mention of this behavior in the specification. When I say "atomic variable", here is what I mean: Thread A tries to write 1 to variable x, and Thread B tries to write 2 to variable x. We should read the value 1 or 2 from variable x, but never an undefined value.
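
    If a compiler with C++11 support is an option, std::atomic gives exactly this guarantee portably (whether it is implemented lock-free for an 8-byte double on a given 32-bit target is something the type itself can report). A minimal sketch:

        #include <atomic>
        #include <thread>
        #include <cstdio>

        std::atomic<double> x{0.0};

        int main()
        {
            std::thread a([] { x.store(1.0); });   // Thread A writes 1
            std::thread b([] { x.store(2.0); });   // Thread B writes 2
            a.join();
            b.join();

            // Prints 1 or 2, never a torn/undefined value.
            std::printf("x = %g (lock-free: %s)\n",
                        x.load(), x.is_lock_free() ? "yes" : "no");
        }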

    Read the article

  • Cannot access host from a virtualbox guest using bridged adapter

    - by David Dai
    I have a Windows 7 host with the firewall turned off, and a Windows XP guest running on VirtualBox 4.2.4r81684. From my Windows XP guest I tried to connect to the FTP server on my host machine (which used to work well), but it didn't work. I tried to ping my host machine, but that didn't work either. Then I tried to ping my guest from the host, and that worked fine.

    My guest IP is 192.168.1.95 and my host IP is 192.168.1.9. The route table on the guest machine is this:

        C:\Documents and Settings\wenlong>route PRINT
        ===========================================================================
        Interface List
        0x1 ........................... MS TCP Loopback interface
        0x2 ...08 00 27 66 54 6c ...... AMD PCNET Family PCI Ethernet Adapter #2 - Packet Scheduler Miniport
        ===========================================================================
        ===========================================================================
        Active Routes:
        Network Destination        Netmask          Gateway       Interface  Metric
                  0.0.0.0          0.0.0.0      192.168.1.1    192.168.1.95      20
                127.0.0.0        255.0.0.0        127.0.0.1       127.0.0.1       1
              192.168.1.0    255.255.255.0     192.168.1.95    192.168.1.95      20
             192.168.1.95  255.255.255.255        127.0.0.1       127.0.0.1      20
            192.168.1.255  255.255.255.255     192.168.1.95    192.168.1.95      20
                224.0.0.0        240.0.0.0     192.168.1.95    192.168.1.95      20
          255.255.255.255  255.255.255.255     192.168.1.95    192.168.1.95       1
        Default Gateway:       192.168.1.1
        ===========================================================================
        Persistent Routes:
          None

    The ARP cache is this:

        C:\Documents and Settings\wenlong>arp -a
        Interface: 192.168.1.95 --- 0x2
          Internet Address      Physical Address      Type
          192.168.1.1           00-26-f2-60-3c-04     dynamic
          192.168.1.9           90-e6-ba-c2-90-2f     dynamic

    It's strange because there was no problem days before, and I didn't make any changes to the settings. Could anybody help? PS: the guest can communicate with other machines in the LAN (for example 192.168.1.114) without problems; it just cannot connect to the host machine.

    Read the article

  • Picking a linux compatible motherboard

    - by Chris
    Last time I bought a new computer (I build them myself) I got a motherboard that had really poor Linux support for a long time - specifically the audio. I had to wait months before the kernel supported the onboard audio chipset. That is exactly the situation I'm trying to avoid this time around. I have some specific questions about "server motherboards", actually. I looked at a few models of server motherboards by Intel, and some random models on Newegg. I wasn't able to see much of a difference from a regular desktop motherboard other than that most had two sockets and support for much more RAM. These boards seem more popular with Linux users. Why? AMD and Intel both have server CPUs as well; same question, what's the difference? To make this question more concrete, I was looking at this motherboard. The main questions about it that I can't answer are:

    Can I get a motherboard without onboard RAID and audio? I wanted to get a hardware RAID controller and a PCI audio card. I thought a server motherboard would be cheaper and not have these "extras", since who wants an audio card on a server?

    Where can I find out about Linux support for the components on this board? "Intel ICH10R", "Realtek ALC889", "Marvell 88E8056"

    I'm buying this computer to work as a Linux desktop for a lot of compiling, coding and audio/video work, but I don't want to rule out the possibility of installing Windows and playing some games at one point (even if the last game I got has been sitting in its box unopened for almost a year). Is it a good idea to buy a "server motherboard" and play games on it, or are desktop boards better value for this? The ultimate solution for me would be a motherboard that had GPL drivers for onboard LAN, a single CPU socket, lots of PCI Express and PCI, USB 3.0, and no fancy hard disk controllers, since I'll be getting a separate one.

    Read the article

  • Programmatically detect number of physical processors/cores or if hyper-threading is active on Windows

    - by HTASSCPP
    I have a multithreaded C++ application that runs on Windows, Mac and a few Linux flavours. To make a long story short: in order for it to run at maximum efficiency, I have to be able to instantiate a single thread per physical processor/core. Creating more threads than there are physical processors/cores degrades the performance of my program considerably. I can already correctly detect the number of logical processors/cores on all three of these platforms. To be able to detect the number of physical processors/cores correctly, I'll have to detect whether hyper-threading is supported AND active. My question therefore is whether there is a way to detect that hyper-threading is supported AND ENABLED? If so, how exactly?
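
    As a rough starting point (not the poster's code): the logical-processor count is easy to get portably, and on x86 the CPUID HTT flag tells you whether the package can expose more logical processors than physical cores; whether hyper-threading is actually enabled still has to be inferred by comparing the logical count against a physical-core count from an OS-specific query (GetLogicalProcessorInformation on Windows, /proc/cpuinfo on Linux, sysctl on Mac). A GCC/Clang sketch of the x86 part:

        #include <cpuid.h>    // GCC/Clang x86 CPUID helper
        #include <thread>
        #include <cstdio>

        int main()
        {
            unsigned eax = 0, ebx = 0, ecx = 0, edx = 0;
            __get_cpuid(1, &eax, &ebx, &ecx, &edx);

            // CPUID leaf 1, EDX bit 28 (HTT): the package can report more than
            // one logical processor per core. This is capability, not "enabled".
            bool htt_capable = (edx >> 28) & 1;

            unsigned logical = std::thread::hardware_concurrency();
            std::printf("logical processors: %u, HTT capable: %s\n",
                        logical, htt_capable ? "yes" : "no");

            // To decide whether hyper-threading is actually in use, compare
            // 'logical' with the physical core count from the OS-specific API.
            return 0;
        }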

    Read the article

  • Best CPUs for speeding up compiling times of C++ w/ DistGCC

    - by Jay
    I'm putting together a distributed build farm with DistGCC to speed up our team's compile times, and I'm just looking for thoughts on which processors to use in the hosts. Are we going to get a noticeable decrease in time using 8 cores vs. 4 hyper-threaded cores? Is there a big difference in time between i7 and Xeon? Etc., etc. I just need advice from people who've put together kick-a build clusters. We've got a majority of the normal things in place to speed up builds (precompiled headers, ccache, local gigabit connections between the hosts, tons of RAM, etc.), so please just give advice on the best processor to use. And money is a factor, but anything's doable if the performance increase is noticeable. Thanks. Jay

    Read the article

  • C++ project type: unicode vs multi-byte; pros and cons

    - by Stefan Valianu
    I'm wondering what the Stack Overflow community thinks when it comes to creating a project (thinking primarily C++ here) with a Unicode or a multi-byte character set. Are there pros to going Unicode straight from the start, implying all your strings will be in wide format? Are there performance issues / larger memory requirements because of the standard use of a larger character? Is there an advantage to this method? Do some processor architectures handle wide characters better? Are there any reasons to make your project Unicode if you don't plan on supporting additional languages? What reasons would one have for creating a project with a multi-byte character set? How do all of the factors above collide in a high-performance environment (such as a modern video game)?
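
    On the memory point specifically, a small illustration (not from the post): ASCII-only text costs one byte per character in a multi-byte/UTF-8 build but sizeof(wchar_t) bytes per character in a wide build (2 on Windows, typically 4 on Linux), which is where the "larger memory requirements" come from. A quick C++ check, assuming plain ASCII content:

        #include <cstdio>
        #include <cstring>
        #include <cwchar>

        int main()
        {
            const char    narrow[] =  "project settings";  // multi-byte build
            const wchar_t wide[]   = L"project settings";  // Unicode (wide) build

            std::printf("narrow: %zu chars, %zu bytes\n",
                        std::strlen(narrow), sizeof(narrow) - 1);
            std::printf("wide:   %zu chars, %zu bytes (wchar_t is %zu bytes)\n",
                        std::wcslen(wide), sizeof(wide) - sizeof(wchar_t), sizeof(wchar_t));
            return 0;
        }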

    Read the article

  • Blender refuses to start

    - by Sekhemty
    I'm trying to run Blender under Linux, but I'm unable to do so; whenever I try, I get some errors. I'm using Kubuntu 12.04 with KDE 4.11.1. This is my video card:

        ~$ lspci | grep VGA
        01:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] RV610/M74 [Mobility Radeon HD 2400 XT]

    I used to have the fglrx proprietary Catalyst drivers installed, but lately they gave me some system-wide problems and I had to revert to the open source Mesa drivers (I don't think that these details are important, but just in case, the whole story is here). With the fglrx drivers Blender was running fine, but now, whenever I try to start it, I get this error message (some parts are in Italian, but I think they are easily understandable):

        ~$ blender
        connect failed: No such file or directory
        Writing: /tmp/blender.crash.txt
        Errore di segmentazione (core dump creato)

    The content of /tmp/blender.crash.txt is as follows:

        # Blender 2.68 (sub 5), Revision: 60150
        # backtrace
        /usr/lib/blender/blender() [0x877a41f]
        [0xb7756400]
        /usr/lib/i386-linux-gnu/libLLVM-3.0.so.1(_ZN4llvm3ARM8SPRClassC1Ev+0x15) [0xa8f4a9d5]
        /usr/lib/i386-linux-gnu/libLLVM-3.0.so.1(+0x25ca48) [0xa8eefa48]
        /lib/ld-linux.so.2(+0xeeab) [0xb7765eab]
        /lib/ld-linux.so.2(+0xef94) [0xb7765f94]
        /lib/ld-linux.so.2(+0x12fa6) [0xb7769fa6]
        /lib/ld-linux.so.2(+0xeccf) [0xb7765ccf]
        /lib/ld-linux.so.2(+0x127f4) [0xb77697f4]
        /lib/i386-linux-gnu/libdl.so.2(+0xbe9) [0xb4ff9be9]
        /lib/ld-linux.so.2(+0xeccf) [0xb7765ccf]
        /lib/i386-linux-gnu/libdl.so.2(+0x133a) [0xb4ffa33a]
        /lib/i386-linux-gnu/libdl.so.2(dlopen+0x47) [0xb4ff9c97]
        /usr/lib/i386-linux-gnu/mesa/libGL.so.1(+0x3cbf0) [0xb7717bf0]
        /usr/lib/i386-linux-gnu/mesa/libGL.so.1(+0x4079d) [0xb771b79d]
        /usr/lib/i386-linux-gnu/mesa/libGL.so.1(+0x1a3aa) [0xb76f53aa]
        /usr/lib/i386-linux-gnu/mesa/libGL.so.1(glXQueryVersion+0x2e) [0xb76f0cee]
        /usr/lib/blender/blender(_ZN15GHOST_WindowX11C1EP15GHOST_SystemX11P9_XDisplayRK10STR_Stringiijj18GHOST_TWindowStatei25GHOST_TDrawingContextTypebbt+0x11c) [0x8f54aec]
        /usr/lib/blender/blender(_ZN15GHOST_SystemX1112createWindowERK10STR_Stringiijj18GHOST_TWindowState25GHOST_TDrawingContextTypebbti+0xd7) [0x8f4f4a7]
        /usr/lib/blender/blender(GHOST_CreateWindow+0xb6) [0x8f4cf86]
        /usr/lib/blender/blender(wm_window_add_ghostwindows+0x205) [0x8799be5]
        /usr/lib/blender/blender(WM_check+0x50) [0x877b670]
        /usr/lib/blender/blender(wm_homefile_read+0x111) [0x87859f1]
        /usr/lib/blender/blender(WM_init+0xd2) [0x8787872]
        /usr/lib/blender/blender(main+0xe6e) [0x873848e]
        /lib/i386-linux-gnu/libc.so.6(__libc_start_main+0xf3) [0xb4e694d3]
        /usr/lib/blender/blender() [0x8778a99]

    The only thing I can guess from this report is that the Mesa drivers are somewhat involved, as I already suspected, but I don't have a clue about what I need to do to try to solve the issue.

    Read the article

  • ASP.NET Web App to compare performance on different hardware?

    - by Guy
    I'm looking for an open source C# ASP.NET web app that can be loaded onto 2 or more dedicated servers and provide me with metrics on how each server is performing. E.g. click on a page and the app does a number of in-memory iterations and/or calculations to test processor throughput; another page would do a bunch of disk access and report on that. I could put one together myself, but there might already be something out there with a whole ton of tools in it to do this. I would imagine that I'm not the first one that would want to compare two machines for use as a web server.

    Read the article

  • How to pass a string containing both single and double quotes as a parameter to XSLT in PHP?

    - by Boaz
    Hi, I have a simple PHP-based XSLT transform code that looks like this:

        $xsl = new XSLTProcessor();
        $xsl->registerPHPFunctions();
        $xsl->setParameter("", "searchterms", $searchterms);
        $xsl->importStylesheet($xslDoc);
        echo $xsl->transformToXML($doc);

    The code passes the variable $searchterms, which contains a string, as a parameter to the XSLT style sheet, which in turn uses it as text:

        <title>search feed for <xsl:value-of select="$searchterms"/></title>

    This works fine until you try to pass a string with mixed quotes in it, say:

        $searchterms = '"some"'." text's quotes are mixed."

    At that point the XSLT processor screams:

        Cannot create XPath expression (string contains both quote and double-quotes)

    What is the correct way to safely pass arbitrary strings as input to XSLT? Note that these strings will be used as a text value in the resulting XML and not as an XPath parameter. Thanks, Boaz

    Read the article

  • Microsecond (or one ms) time resolution on an embedded device (Linux Kernel)

    - by ChrisDiRulli
    Hey guys, I have a kernel module I've built that requires at least 1 ms time resolution. I currently use do_gettimeofday(), but I'm concerned that this won't work once I move my module to an embedded device. The device has a 180 MHz processor (MIPS) and the default HZ value in the kernel is 100, so using jiffies will only give me at best 10 ms resolution. That won't cut it. What I'd like to know is whether do_gettimeofday() is based on the timer interrupt (HZ). Can it be guaranteed to provide at least 1 ms of resolution? Thanks!

    Read the article

  • Blue screen issue

    - by Jack
    I received several BSODs that are recorded in the following logs:

        Problem signature:
          Problem Event Name: BlueScreen
          OS Version: 6.1.7601.2.1.0.256.48
          Locale ID: 3081
        Additional information about the problem:
          BCCode: 50
          BCP1: FFFFF95FF8150C10
          BCP2: 0000000000000008
          BCP3: FFFFF95FF8150C10
          BCP4: 0000000000000005
          OS Version: 6_1_7601
          Service Pack: 1_0
          Product: 256_1
        Files that help describe the problem:
          C:\Windows\Minidump\040412-20030-01.dmp
          C:\Users\Jack\AppData\Local\Temp\WER-33025-0.sysdata.xml

        Problem signature:
          Problem Event Name: BlueScreen
          OS Version: 6.1.7601.2.1.0.256.48
          Locale ID: 3081
        Additional information about the problem:
          BCCode: 1e
          BCP1: 0000000000000000
          BCP2: 0000000000000000
          BCP3: 0000000000000000
          BCP4: 0000000000000000
          OS Version: 6_1_7601
          Service Pack: 1_0
          Product: 256_1
        Files that help describe the problem:
          C:\Windows\Minidump\040412-32729-01.dmp
          C:\Users\Jack\AppData\Local\Temp\WER-64319-0.sysdata.xml

    It seems to occur at random: I have gone 2 months without a BSOD, then had 10+ in a week without changing what I am doing. This is my system:

        Windows 7 Professional 64-bit
        Gigabyte GA-890GPA-UD3H
        AMD Phenom II X6 1090T processor, 3.2 GHz
        8 GB RAM (4x 2 GB)
        Radeon HD 7850
        2 TB HDD
        Thermaltake 500 W PSU

    I'm not sure about what the BSOD says; it just counts to 100 by 5's and then restarts the computer. It happens fast, and I have tried to get a picture before, but to no avail.

    Read the article

  • Why GPRS modem provides embedded TCP/IP stack

    - by Christian Madsen
    My colleague and I are mining the GPRS modem market for a module suitable for use with embedded Linux. During the market scan, we see that several vendors highlight that their modems include an embedded TCP/IP stack. This makes me wonder: when we are using embedded Linux, which already contains a TCP/IP stack and connects using PPP, will it make use of the stack included in the GPRS modem at all? My current assumption is that the stack is included for use with tiny microcontroller OSes that do not supply their own stack. Also, some of the modems allow for running small applications IN the modem's baseband processor, which could explain the embedded stack... So: is the TCP/IP stack supplied by the GPRS modem superfluous when using it with a high-level OS, or did I overlook something?

    Read the article

  • Computer Comparison - which is "better"

    - by David Murdoch
    A company I work with recently replaced their old server and gave it to me. Their old server is a Dell PowerEdge 2600. I've been playing with the machine and even installed Windows Server 2008 on it... and it seems to run it pretty well. Here are the specs for the two machines:

    Dev machine:
        AMD Athlon64 3000+ @ 2.38 GHz (overclocked from 1.8 GHz [@ 280x8.5] - it is stable-ish)
        Memory (RAM): 1x 1GB OCZ PC3200 (dual-channel)
        300GB HD
        OS: Windows XP Pro (32-bit)
        SuperPi 1M digit test: 40 seconds

    Dell PowerEdge 2600 server:
        Intel Xeon CPU 2.8GHz
        Memory (RAM): 2x 512MB (PC2700, not dual-channel)
        68GB HD (RAID 5)
        OS: Windows Server 2000 (32-bit)
        SuperPi 1M digit test: 56 seconds [using 1 processor]
        (Themes and Aero-Glass UI turned off, of course)

    I use my computer to regularly run Photoshop CS5, Illustrator CS5, Flash CS5, 5 browsers (Chrome, FF, IE, Safari, Opera), iTunes, Visual Studio 2010, and Kaspersky Internet Security 2010 [sometimes simultaneously :-) ]. The SuperPi test has my dev machine coming in about 30% faster than the server machine... though this could be due to the server running "Vista" with background processes prioritized.

    Do you think it would be realistic/advantageous for me to move from my dev machine to the Dell PowerEdge 2600? Is it possible to install additional DVD drives/burners on the server? Can I install my internal 300 GB hard drive on the server? Can I add some USB 2.0 ports? Note: I'll probably install Win XP Pro on the dev machine if I do switch. If not, are there any creative and useful ways for me to take advantage of this server (with the goal of faster computing)?

    Read the article

  • Better to build or buy a compute grid platform?

    - by James B
    I am looking to do some quite processor-intensive brute force processing for string matching. I have run my prototype in a multi-threaded environment and compared the performance to an implementation using GridGain with a couple of nodes (also multithreaded). What I observed was that my GridGain implementation ran slower than my multithreaded implementation. It could be that there was a flaw in my GridGain implementation, but it was only a prototype and I thought the results were indicative. So my question is this: what are the advantages of having to learn and then build an implementation for a particular grid platform (Hadoop, GridGain, or EC2 if going hosted - other suggestions welcome), when one could fairly easily put together a lightweight compute grid platform with a much shallower learning curve? I.e., what do we get for free with these cloud/grid platforms that is worth having or tricky to implement? (Please note, I don't have any need for a data grid.) Cheers, -James (p.s. Happy to make this community wiki if need be)

    Read the article
