Search Results

Search found 19796 results on 792 pages for 'bit twiddler'.

  • jQuery .html not writing span on page load, writes the span on ajax response

    - by pedalpete
    I've got a bit of a calendar I'm building, with events that can run past the boundary of one day, and a graph which shows how many events are running concurrently. In order to graph the events which go into another day, I look for hour 24 and run an extra bit of code:

        jQuery('li#hour'+placeGroup+'_'+schedDay2+'_'+thisHour, nextDay)
            .data('count', numStaff)
            .html(''+printHour+''+numEvents+' scheduled')
            .css('height', numEvents*5+'px');

    The code is mostly working: the li items are being found, the proper height is applied via .css(), and the data is set, which I've checked by running

        alert(jQuery('li#hour'+placeGroup+'_'+schedDay2+'_'+thisHour, nextDay).data('count'));

    right after the previous bit of code. The only thing that isn't being done is the .html() (add span) part. I run this code in two places: once when the page loads, and once when an action is taken which changes the number of events via an ajax response. The li DOM element I'm writing to is created before the script is run, and the height is being applied, but not the span - and only on page load. Any ideas on this one? I'm stumped.

  • How to best future proof my application that needs to connect to Outlook?

    - by Troy
    I have a contact management application written in Delphi which has a "Sync with Outlook" feature that I developed 10 years ago. Now I'm going back to add some features and fix some bugs. This sync feature uses the Outlook object model to get started, but it has an optional mode called "Use MAPI Enhancements" where it uses pure MAPI to speed up how it looks for changes, and to allow notes to be synced with RTF instead of just plain text. I'm wondering if supporting two parallel paths of execution is a good idea or not.

    If I went with all MAPI, I believe I'd avoid some security prompts, and I'd avoid situations where anti-virus "script-blocking" features block my app from connecting to Outlook. On the down side, I believe my 32-bit app would not be able to connect to 64-bit Outlook 2010 using MAPI, and I wonder about the future of MAPI in general. If I stick with the Outlook object model, will my 32-bit app be able to connect to it (since it's out-of-process COM)? If so, that's a compelling reason to keep my Outlook object model execution path in place. But if not, and if my app needs to be compiled for x64, why not just go with pure MAPI?

  • Help me choose a web development framework/platform that will make me learn something

    - by Sergio Tapia
    I'm having a bit of an information overload these past two days. I'm planning to start my own website that will allow local businesses to list their items on sale; users can then come in and search for "Abercrombie t-shirt" and the stores that sell them will be listed. It's a neat little project I'm really excited about, and I'm sure it'll take off, but I'm having problems from the get-go. Sure, I could use ASP.NET for it - I'm a bit familiar with it, and the IDE for ASP.NET pages is second to none - but I feel this is a great chance to learn something new, to branch out a bit and not regurgitate .NET like a robot. I've been looking and asking around, but it's all just noise and I can't make an educated decision. Can you help me choose a framework/platform that will teach me something that's nice to know in the job market, but that also helps me grow as a professional? So far I've looked at:

        Ruby on Rails
        Kohana
        CakePHP
        CodeIgniter
        Symfony

    But they are all very esoteric to me, and I have trouble even finding out which IDE to use that will give me auto-complete for each framework's keywords/methods. Thank you for your time.

  • Deterministic and non-uniform long string generation from a seed

    - by Limonup
    I had this weird idea for an encryption scheme that I wanted to try out. It may be bad, and it may have been done before, but I'm just doing it for fun. The short version of the question: is it possible to generate a long, deterministic and non-uniformly distributed string/sequence of numbers from a small seed?

    Long(er) version: I was thinking of encrypting a text by changing its encoding. The new encoding would be generated via the Huffman algorithm. To work well, the Huffman algorithm needs a fairly long text with a non-uniform distribution; then characters can have different bit lengths, which would be the primary strength of this encryption. The problem is that it's impractical to enter/remember a long text each time you want to decrypt the text. So I was wondering: is it possible to generate a text from a password seed? It doesn't matter what the text is, as long as it has a non-uniform distribution of characters and the exact same sequence can be recreated each time you give it the same seed. Preferably, are there any functions/extensions in Python that can do this?

    EDIT: To expand on the "strength" of varying bit lengths: if I have the string "test", the ASCII values are 116, 101, 115, 116, which gives the bit values

        1110100 1100101 1110011 1110100

    Then, say, my Huffman algorithm generates an encoding like

        t = 101
        e = 1100111
        s = 10001

    The final string is 101 1100111 10001 101; if we encode this back to ASCII, we get 1011100 1111000 1101000, which is 3 entirely different characters. Obviously it's impossible to perform any kind of frequency analysis or something like that on this.
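
    A minimal sketch of one way to do this (in C++ rather than the Python the question mentions; the idea ports directly). std::mt19937 is fully deterministic for a given seed; note that std::discrete_distribution's exact sampling is implementation-defined, so a strictly portable version would map the engine's raw output onto the weighted alphabet itself. The alphabet and weights here are arbitrary placeholders:

        #include <cstddef>
        #include <random>
        #include <string>

        // Deterministically expand a password into a long, non-uniform text:
        // seed a PRNG from the password bytes and sample letters with skewed
        // weights, so the output has the lopsided frequencies Huffman needs.
        std::string textFromSeed(const std::string& password, std::size_t length)
        {
            std::seed_seq seq(password.begin(), password.end());
            std::mt19937 rng(seq);
            const std::string alphabet = "etaoinshrdlu ";   // hypothetical alphabet
            std::discrete_distribution<int> pick(
                {12, 9, 8, 8, 7, 7, 6, 6, 6, 4, 4, 3, 18}); // one weight per letter
            std::string text;
            for (std::size_t i = 0; i < length; ++i)
                text += alphabet[pick(rng)];
            return text;                                    // same seed -> same text
        }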

  • Passing Derived Class Instances as void* to Generic Callbacks in C++

    - by Matthew Iselin
    This is a bit of an involved problem, so I'll do the best I can to explain what's going on. If I miss something, please tell me so I can clarify. We have a callback system where on one side a module or application provides a "Service" and clients can perform actions with this Service (a very rudimentary IPC, basically). For future reference, let's say we have some definitions like so:

        typedef int (*callback)(void*); // This is NOT in our code, but makes explaining easier.
        installCallback(string serviceName, callback cb); // Really handled by a proper management system
        sendMessage(string serviceName, void* arg); // arg = value to pass to callback

    This works fine for basic types such as structs or builtins. We have an MI structure a bit like this:

        Device <- Disk <- MyDiskProvider

        class Disk : public virtual Device
        class MyDiskProvider : public Disk

    The provider may be anything from a hardware driver to a bit of glue that handles disk images. The point is that classes inherit Disk. We have a "service" which is to be notified of all new Disks in the system, and this is where things unravel:

        void diskHandler(void *p)
        {
            Disk *pDisk = reinterpret_cast<Disk*>(p); // Uh oh!
            // Remainder is not important
        }

        SomeDiskProvider::initialise()
        {
            // Probe hardware, whatever...

            // Tell the disk system we're here!
            sendMessage("disk-handler", reinterpret_cast<void*>(this)); // Uh oh!
        }

    The problem is, SomeDiskProvider inherits Disk, but the callback handler can't receive that type (as the callback function pointer must be generic). Could RTTI and templates help here? Any suggestions would be greatly appreciated.
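
    A sketch of one pattern that avoids the trap (an assumed fix, not the poster's actual code): the round trip through void* must use the same static type on both ends, so upcast to Disk* before erasing the type. With a virtual base, the upcast can adjust the pointer value, which is exactly why reinterpret_cast of `this` is dangerous here:

        #include <string>

        struct Device { virtual ~Device() {} };
        struct Disk : public virtual Device { /* ... */ };
        struct SomeDiskProvider : public Disk { void initialise(); };

        typedef int (*callback)(void*);

        int diskHandler(void* p)
        {
            // Safe only because the sender erased exactly a Disk* (see below).
            Disk* pDisk = static_cast<Disk*>(p);
            (void)pDisk; // remainder is not important
            return 0;
        }

        // Stand-in for the real dispatch, so the sketch is self-contained.
        void sendMessage(const std::string& serviceName, void* arg)
        {
            (void)serviceName;
            diskHandler(arg);
        }

        void SomeDiskProvider::initialise()
        {
            Disk* asDisk = this;  // implicit upcast; may adjust the pointer
            sendMessage("disk-handler", static_cast<void*>(asDisk));
        }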

  • jQuery / PHP - pre-loading the next page before navigating to it

    - by Tim
    Hey all. Using jQuery, is there a way to prevent going to the next page until the animation has finished AND the next page has completely loaded (including images)? The code below works, but it is a bit clunky. You can see what I am trying to achieve here: http://bit.ly/aOeYgE - the first three or so pages work, but when you click through to the homepage a few times (after it's cached) you will see that it jumps and the animation isn't very smooth. As you can see in the code below, the height is immediately set to 0; then, when the page has loaded, the height is animated to 500px. When users navigate to a new page, the height should go back to 0, the next page content should load, and then on the new window's load event the first bit of code runs again to animate the height back to 500px.

        $(".content-center").css({"height": "0px"});

        $(window).load(function() {
            if ($('.content-center').is(':not(:animated)')) {
                $('.content-center').animate({height: "500px"}, 450);
            }
        });

        $("a").click(function(event){
            $(".content-center").animate({height: "0px"}, 500);
            if ($('.content-center').is(':not(:animated)')) {
                navigate($(this).attr('href'));
                event.preventDefault();
            }
        });

    If anyone has any suggestions or alternative ideas then it would be hugely appreciated. Many thanks, Tim

  • jQuery Checkbox Error

    - by Zack Fernandes
    Hello, I am working on a jQuery-based todo list interface and have hit a bit of a wall. The jQuery I am working with is a bit hacked together from various tutorials I have read, as I'm a bit of a beginner.

        $('#todo input:checkbox').click(function(){
            var id = this.attr("value");
            if (!$(this).is(":checked")) {
                alert("Starting.");
                $.ajax({
                    type: "GET",
                    url: "/todos/check/"+id,
                    success: function(){
                        alert("It worked.")
                    }
                });
            }
        })

    This is the HTML I am using:

        <div id="todo">
            <input type="checkbox" checked="yes" value="1"> Hello, world.
            <br />
        </div>

    Any help on this would be greatly appreciated. For reference, the reason I have alerts in the jQuery is for debugging; the reason I can tell the code isn't working is that I am not getting these alerts. Thanks.

  • Debugged Program Window Won't Close

    - by Marc Bernier
    Hi, I'm using VS 2008 on a 64-bit XP machine. I'm debugging a 32-bit C++ DLL via a console program. The DLL and EXE projects are contained in the same SLN so that I can modify the DLL as I test.

    Every once in a while I kill the program with Debug | Stop Debugging (Shift-F5). VS stops the program, but the console window stays open! If I'm sitting at a breakpoint and hit Shift-F5, it terminates properly, but if the program is running full-tilt when I stop it, I often see this instead. The big problem is that I can't close these zombie windows: using End Task in Task Manager does nothing (no message, no nothing), and when I shut down the machine it is unable to finish because of the orphans, so I have to resort to actually turning off the power.

    I thought this was connected to having the DLL and EXE projects in the same SLN, as for months I worked on this project in two VS instances, one for the DLL and the other for the EXE, continually jumping back and forth between the windows as I worked. The problem never happened until I put the two projects into a single SLN. The single SLN works a lot better, but this anomaly is very irritating. Any ideas, anyone?

    UPDATE: After a bit of searching (here), I found that it appears to be related to one of the updates from last Tuesday (KB977165 or KB978037). Thank you Microsoft for your excellent pre-release testing.

  • How would I code a complex formula parser manually?

    - by StormianRootSolver
    Hm, this is language-agnostic; I would prefer doing it in C# or F#, but I'm more interested this time in the question "how would that work anyway". What I want to accomplish is:

    a) I want to LEARN it - it's about my ego this time; it's a fun project where I want to show myself that I'm really good at this stuff.
    b) I know a tiny little bit about EBNF (although I don't yet know how operator precedence works in EBNF - Irony.NET does it right, I checked the examples, but this is a bit ominous to me).
    c) My parser should be able to take, for example, 5 * (3 + (2 - 9 * (5 / 7)) + 9) and give me the right result.
    d) To be quite frank, this seems to be the biggest problem in writing a compiler or even an interpreter for me. I would have no problem generating even 64-bit assembler code (I CAN write assembler manually), but the formula parser...
    e) Another thought: even simple computers (like my old Sharp 1246S with only about 2 kB of RAM) can do that... it can't be THAT hard, right? And even very, very old programming languages have formula evaluation; BASIC is from 1964 and it could already calculate the kind of formula I presented as an example.
    f) A few ideas, a few inspirations would really be enough - I just have no clue how to do operator precedence and the parentheses. I DO, however, know that it involves an AST and that many people use a stack.

    So, what do you think?
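
    One classic answer to (f) is recursive descent, where operator precedence falls out of the grammar: each rule handles one precedence level and delegates to the next tighter one, and parentheses are just a recursive call back to the top. A minimal sketch in C++ (no error handling, assumed to receive well-formed input):

        #include <cstdlib>
        #include <iostream>

        const char* p;                       // cursor into the input

        double expr();                       // forward declaration

        double factor()                      // numbers and parenthesised sub-expressions
        {
            while (*p == ' ') ++p;
            if (*p == '(') { ++p; double v = expr(); ++p; /* skip ')' */ return v; }
            char* end;
            double v = std::strtod(p, &end);
            p = end;
            return v;
        }

        double term()                        // * and / bind tighter than + and -
        {
            double v = factor();
            for (;;) {
                while (*p == ' ') ++p;
                if      (*p == '*') { ++p; v *= factor(); }
                else if (*p == '/') { ++p; v /= factor(); }
                else return v;
            }
        }

        double expr()                        // + and - at the lowest precedence
        {
            double v = term();
            for (;;) {
                while (*p == ' ') ++p;
                if      (*p == '+') { ++p; v += term(); }
                else if (*p == '-') { ++p; v -= term(); }
                else return v;
            }
        }

        int main()
        {
            p = "5 * (3 + (2 - 9 * (5 / 7)) + 9)";
            std::cout << expr() << "\n";     // prints 37.8571
        }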

  • Warning: cast increases required alignment

    - by dash-tom-bang
    I'm currently working on a platform where a legacy codebase issues a large number of "cast increases required alignment to N" warnings, where N is the size of the target of the cast.

        struct Message
        {
            int32_t id;
            int32_t type;
            int8_t  data[16];
        };

        int32_t GetMessageInt(const Message& m)
        {
            return *reinterpret_cast<const int32_t*>(&m.data[0]);
        }

    Hopefully it's obvious that a "real" implementation would be a bit more complex, but the basic point is that I've got data coming from somewhere, I know that it's aligned (because I need the id and type to be aligned), and yet I get the message that the cast is increasing the alignment - in the example case, to 4. Now I know that I can suppress the warning with an argument to the compiler, and I know that I can cast the expression inside the parentheses to void* first, but I don't really want to go through every bit of code that needs this sort of manipulation (there's a lot, because we load a lot of data off disk, and that data comes in as char buffers so that we can easily pointer-advance). Can anyone give me any other thoughts on this problem? To me it seems like such an important and common operation that you wouldn't want to warn about it, and if there actually is the possibility of doing it wrong then suppressing the warning isn't going to help. Finally, can't the compiler know, as I do, how the object in question is actually aligned in the structure, so that it needn't worry about the alignment of that particular object unless it got bumped a byte or two?
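
    One warning-free alternative, sketched here as an assumption rather than the codebase's actual fix: since the compiler cannot prove the alignment of an arbitrary byte buffer, copy the bytes out with memcpy instead of casting. Compilers typically optimise a small fixed-size memcpy into a single load, so there is usually no runtime cost:

        #include <cstdint>
        #include <cstring>

        struct Message
        {
            std::int32_t id;
            std::int32_t type;
            std::int8_t  data[16];
        };

        std::int32_t GetMessageInt(const Message& m)
        {
            std::int32_t value;
            // No alignment assumption, no cast, no warning.
            std::memcpy(&value, &m.data[0], sizeof value);
            return value;
        }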

  • looking to streamline my RSS feed mashup

    - by Mark Cejas
    Hello crafty developers. I have aggregated RSS feeds from various sources with RSSOwl, fetching directly from the Social Mention API. The RSS feeds are categorized into the following major categories: blogs, news, Twitter, Q&A and social networking sites. Each major category is nested with a common group of RSS feeds that represent a particular client/brand ontology. Merging these feeds into the RSSOwl reader application allows me to conduct and save refined search queries (from the aggregated data) into a single file that I can then tag and further segment for analysis. This scheme is used for my own research needs and has helped me considerably.

    However, I find this RSS mashup scheme kind of clumsy: it requires quite a bit of time to initially organize all of the feeds, and I would like to be able to do further natural language processing on the data, as well as eventually rank the collected list of URLs into some order of media prominence. I don't want to pay the ridiculous Radian6 web analytics fees when my intuition tells me that, with a bit of elbow grease, I can maybe leverage some resources available online to develop a functional small-scale web mining application and get some good intelligence from it. I am now starting to learn a little about computer science - my background is in physical science/statistics - so is my thinking on the right track?

    I guess I am imagining an application that allows me to query in a refined manner: one that lets me search for keyword combinations, applying AND/OR operators, selectively focus my queries on particular sources - like a collection of blogs, or Twitter, or social networking communities - and then save the results of my queries in a structured format that can be manipulated and explored. Am I dreaming? I just had to get all of this out. Any bit of advice and insight would be hugely appreciated. My best, Mark

  • Algorithm for assigning a unique series of bits for each user?

    - by Mark
    The problem seems simple at first: just assign an ID and represent it in binary. The issue arises because the user is capable of changing any number of 0 bits to 1 bits. To clarify, the hash could go from 0011 to 0111 or 1111, but never to 1010. Each bit has an equal chance of being changed and is independent of other changes. What would you have to store in order to go from hash back to user, assuming a low percentage of bit tampering by the user? I also assume failure in some cases, so the correct solution should have an acceptable error rate. I would estimate the maximum number of bits tampered with to be about 30% of the total set. I guess the acceptable error rate would depend on the number of hashes needed and the number of bits being set per hash. I'm worried that with enough manipulation the ID cannot be reconstructed from the hash. The question I am asking, I guess, is: what safeguards or encoding schemes can I use to make sure the ID can still be recovered?
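
    A sketch of one direction (an assumption on my part, not a known-good scheme): give each user a sparse random codeword. Because tampering only turns 0s into 1s, the true codeword always remains a subset of the received hash, so decoding is a containment test, and errors show up as ambiguity when more than one codeword fits:

        #include <bitset>
        #include <vector>

        const std::size_t BITS = 64;   // hash width (hypothetical)
        typedef std::bitset<BITS> Code;

        // Returns the user id whose codeword is contained in the received
        // hash, or -1 when zero or several codewords fit (decoding failure).
        int decode(const Code& received, const std::vector<Code>& codebook)
        {
            int match = -1;
            for (std::size_t id = 0; id < codebook.size(); ++id) {
                if ((codebook[id] & received) == codebook[id]) {  // subset test
                    if (match != -1)
                        return -1;                                // ambiguous
                    match = static_cast<int>(id);
                }
            }
            return match;
        }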

  • X.509 certificate information

    - by sid
        Certificate:
            Data:
                Version: 3 (0x2)
                Serial Number: 95 (0x5f)
                Signature Algorithm: sha1WithRSAEncryption
                Issuer: C=, O=, CN=
                Validity
                    Not Before: Apr 22 16:42:11 2008 GMT
                    Not After : Apr 22 16:42:11 2009 GMT
                Subject: C=, O=, CN=, L=, ST=
                Subject Public Key Info:
                    Public Key Algorithm: rsaEncryption
                    RSA Public Key: (1024 bit)
                        Modulus (1024 bit):
                            ...
                        Exponent: 65537 (0x10001)
                X509v3 extensions:
                    X509v3 Key Usage: critical
                        Digital Signature, Key Encipherment
                    X509v3 Extended Key Usage: critical
                        Code Signing
                    X509v3 Authority Key Identifier:
                        keyid: ...
            Signature Algorithm: sha1WithRSAEncryption
                a9:55:56:9b:9e:60:7a:57:fd:7:6b:1e:c0:79:1c:50:62:8f:
                ...
        -----BEGIN CERTIFICATE-----
        ...
        -----END CERTIFICATE-----

    In this certificate, which part is the public key - is it the Modulus? What does the hex string under the final Signature Algorithm (a9:55:56:...) represent - is it a message digest? And what is between -----BEGIN CERTIFICATE----- and -----END CERTIFICATE----- - is that the whole certificate? As a novice I'm a little confused between the message digest and the public key. Thanks in advance - opensid

  • Endianness and C APIs: specifically OpenSSL

    - by Hassan Syed
    I have an algorithm that uses the following OpenSSL calls:

        HMAC_Update() / HMAC_Final()           // RIPEMD-160
        EVP_CipherUpdate() / EVP_CipherFinal() // CBC Blowfish

    These functions take an unsigned char * to the "plain text". My input data comes from a C++ std::string::c_str(), which originates from a protocol buffer object as an encoded UTF-8 string. UTF-8 strings are meant to be endian-neutral. However, I'm a bit paranoid about how OpenSSL may perform operations on the data. My understanding is that encryption algorithms work on 8-bit blocks of data, and if an unsigned char * is used for pointer arithmetic when the operations are performed, the algorithms should be endian-neutral and I do not need to worry about anything. My uncertainty is compounded by the fact that I am working on a little-endian machine and have never done any real cross-architecture programming. My beliefs/reasoning are based on the following two properties:

        1. std::string (not wstring) internally uses an 8-bit pointer, and the resulting c_str() pointer will iterate the same way regardless of the CPU architecture.
        2. Encryption algorithms are either by design, or by implementation, endian-neutral.

    I know the best way to get a definitive answer is to use QEMU and do some cross-platform unit tests (which I plan to do). My question is a request for comments on my reasoning, and perhaps it will assist other programmers faced with similar problems.
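
    A small demonstration of the reasoning (a sketch): byte order only exists for multi-byte values, while a UTF-8 std::string is already a plain byte sequence, which is exactly what the *_Update calls consume byte by byte:

        #include <cstdint>
        #include <cstdio>
        #include <cstring>
        #include <string>

        int main()
        {
            // A multi-byte integer IS stored in host byte order...
            std::uint32_t n = 0x11223344;
            unsigned char bytes[sizeof n];
            std::memcpy(bytes, &n, sizeof n);
            std::printf("%02x %02x %02x %02x\n",
                        bytes[0], bytes[1], bytes[2], bytes[3]);
            // little-endian prints "44 33 22 11", big-endian "11 22 33 44"

            // ...but a UTF-8 string is a byte sequence with no order at all.
            const std::string utf8 = "caf\xc3\xa9";  // "café"
            for (unsigned char c : utf8)
                std::printf("%02x ", c);             // identical on every architecture
            std::printf("\n");
            return 0;
        }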

  • Can someone confirm how Microsoft Excel 2007 internally represents numbers?

    - by Jon
    I know the IEEE 754 floating point standard by heart, as I had to learn it for an exam. I know exactly how floating point numbers are used and the problems that they can have, and I can manually do any operation on the binary representation of floating point numbers. However, I have not found a single source which unambiguously states that Excel uses 64-bit floating point numbers to internally represent every single cell "type" in Excel except for text. I have no idea whether some of the types use signed or unsigned integers and some use 64-bit floating point.

    I have found literally trillions of articles which (1) describe floating point numbers and then (2) talk about being careful with Excel because of floating point numbers. I have not found a single statement saying "all types are 64-bit floating point numbers except text". I have not found a single statement which says "changing the type of a cell only changes its visual representation and not its internal representation, unless you change the type from text to some other type which is not text, or you change some other type which is not text to text". This is literally all I want to know, and it's so simple and axiomatic that I am amazed I can find trillions of articles and pages which talk around these statements but do not state them directly.
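
    For reference, here is what the 64-bit (IEEE 754 double) claim would imply in practice - a quick C++ check of the format itself, not of Excel:

        #include <cstdio>

        int main()
        {
            // Integers are exact only up to 2^53 in a 64-bit double...
            double big = 9007199254740992.0;   // 2^53
            std::printf("%.0f\n", big + 1.0);  // prints 9007199254740992 again

            // ...and many short decimals have no exact binary representation.
            std::printf("%.20f\n", 0.1);       // 0.10000000000000000555...
            return 0;
        }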

  • Design by contracts and constructors

    - by devoured elysium
    I am implementing my own ArrayList for school purposes, but to spice things up a bit I'm trying to use C# 4.0 Code Contracts. All was fine until I needed to add contracts to the constructors. Should I add Contract.Ensures() in the parameterless constructor?

        public ArrayList(int capacity) {
            Contract.Requires(capacity > 0);
            Contract.Ensures(Size == capacity);

            _array = new T[capacity];
        }

        public ArrayList() : this(32) {
            Contract.Ensures(Size == 32);
        }

    I'd say yes: each method should have a well-defined contract. On the other hand, why put it there if it's just delegating work to the "main" constructor? Logic-wise, I wouldn't need to. The only point I see where it'd be useful to explicitly define the contract in both constructors is if we get IntelliSense support for contracts in the future; were that to happen, it would be useful to be explicit about which contracts each method has, as they'd appear in IntelliSense. Also, are there any books around that go a bit deeper into the principles and usage of Design by Contract? One thing is knowing the syntax of contracts in a language (C#, in this case); another is knowing how and when to use them. I read several tutorials and Jon Skeet's C# in Depth article about it, but I'd like to go a bit deeper if possible. Thanks

  • Multiple build configurations in Qt

    - by user360607
    Hi all! I'm new to Qt Creator and I have several questions regarding multiple build configurations. A side note: I have Qt Creator 1.3.1 installed on my Linux machine. I need two configurations in my Qt Creator project. The thing is that these aren't simply debug and release, but are based on the target architecture: x86 or x64. I came across http://stackoverflow.com/questions/2259192/building-multiple-targets-in-qt-qmake and from that I went on to try something like:

        Conf_x86 {
            TARGET = MyApp_x86
        }
        Conf_x64 {
            TARGET = MyApp_x64
        }

    This way, however, I don't seem to be able to use the Qt Creator IDE to build each of these separately (Build All, Rebuild All, etc. from the IDE menu). Is there a way to achieve this - maybe even show Conf_x86 and Conf_x64 as build configurations in Qt Creator?

    One other thing: the Qt I have is 64-bit, so by default the target built using the Qt Creator IDE will also be 64-bit. I noticed that the effective qmake call in the build step includes the option '-spec linux-g++-64'. I also noticed that if I add '-spec linux-g++-32' under 'Additional arguments' it overrides '-spec linux-g++-64' and the resulting target is 32-bit. How can I achieve this by simply editing the contents of the .pro file? I saw that all these changes are initially saved in the .pro.user file, but that doesn't suit me at all; I need to be able to make these configurations from the .pro file if possible. Any help will be appreciated. 10x in advance!

  • .ico icons not showing up on Windows

    - by Ali
    I followed the Qt Resource System guide and the .ico icons appear on Linux, but they are not showing up on Windows when I run the application from Qt Creator. I suspect a plugin issue, based on "Qt/C++: Icons not showing up when program is run under windows O.S", but I failed to figure out what to do from the guide "How to Create Qt Plugins". Is it a plugin issue, or why aren't the icons showing up on Windows? If it is a plugin issue: how do I tell my application where to find qico.dll?

    Details of the environment:

        Works on: Kubuntu 12.04 LTS, Qt Creator 2.4.1 and Qt 4.7.4 (64 bit)
        Fails on: Windows XP SP2 32 bit, Qt Creator 2.4.1 and Qt 4.7.4 (32 bit)

    Everything is at its defaults (as installed, out of the box); I did not mess with the settings.

    resources.qrc (also tried with <qresource prefix="/">):

        <!DOCTYPE RCC><RCC version="1.0">
        <qresource>
            <file>images/spreadsheet.ico</file>
        </qresource>
        </RCC>

    From the application's .pro file:

        RESOURCES += \
            resources.qrc

        OTHER_FILES += \
            images/spreadsheet.ico

    In the corresponding source file:

        QIcon(":/images/spreadsheet.ico")

    I repeat: it works on Linux.

  • Fastest way to perform subset test operation on a large collection of sets with same domain

    - by niktech
    Assume we have trillions of sets stored somewhere. The domain for each of these sets is the same, and it is finite and discrete, so each set may be stored as a bit field (e.g. 0000100111...) of a relatively short length (e.g. 1024). That is, bit X in the bitfield indicates whether item X (of 1024 possible items) is included in the given set or not. Now, I want to devise a storage structure and an algorithm to efficiently answer the query: which sets in the data store have set Y as a subset? Set Y itself is not present in the data store and is specified at run time. The simplest way to solve this would be to AND the bitfield for set Y with the bitfield of every set in the data store, one by one, picking the ones whose AND result matches Y's bitfield. How can I speed this up? Is there a tree structure (index) or some smart algorithm that would allow me to perform this query without having to AND every stored set's bitfield? Are there databases that already support such operations on large collections of sets?
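
    One indexing idea, sketched under the assumption that set bits are reasonably sparse: any stored set containing Y must have every bit of Y set, so an inverted index from bit to stored sets lets the query scan only the bucket of Y's rarest bit instead of the whole store:

        #include <bitset>
        #include <vector>

        const std::size_t DOMAIN_SIZE = 1024;
        typedef std::bitset<DOMAIN_SIZE> Set;

        std::vector<Set> store;                                       // all stored sets
        std::vector<std::vector<std::size_t>> buckets(DOMAIN_SIZE);   // bit -> ids of sets with that bit

        void insertSet(const Set& s)
        {
            std::size_t id = store.size();
            store.push_back(s);
            for (std::size_t b = 0; b < DOMAIN_SIZE; ++b)
                if (s[b]) buckets[b].push_back(id);
        }

        // All stored sets having y as a subset (y needs at least one bit;
        // an empty y matches everything and should be special-cased).
        std::vector<std::size_t> supersetsOf(const Set& y)
        {
            std::size_t best = DOMAIN_SIZE;                 // rarest set bit of y
            for (std::size_t b = 0; b < DOMAIN_SIZE; ++b)
                if (y[b] && (best == DOMAIN_SIZE || buckets[b].size() < buckets[best].size()))
                    best = b;
            std::vector<std::size_t> result;
            if (best == DOMAIN_SIZE) return result;
            for (std::size_t id : buckets[best])
                if ((store[id] & y) == y)                   // full subset check
                    result.push_back(id);
            return result;
        }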

  • When is Googling it wrong?

    - by Drahcir
    I've been going through Stack Overflow for quite a bit now and have noticed that certain people (usually experienced programmers) frown upon Googling (researching) certain problems. Since I myself tend to use Google quite a bit to solve certain programming-related issues, I found certain comments rather demoralising. Now, some of you may have come here trigger-happy to delete this post, but I need some clarification.

    I usually Google things that are syntax-related, things I would never have figured out on my own. For example, I once wondered how to access the properties of a class that I didn't have a direct relationship to; after a bit of research I discovered reflection and got what I wanted. Another scenario is learning a new language - in my case Silverlight, which differs in certain aspects of .NET compared to, say, ASP.NET. A few weeks ago I had no idea how to load another Silverlight page (UserControl) and had to Google my way to the solution, which I found wasn't as simple as I had imagined.

    Scenario three is where I myself frown: just stealing a huge chunk of code to avoid doing the work yourself - for example, paging an HTML table using JavaScript, where one just copies and pastes the JavaScript code without so much as trying to understand how it works. I do admit I have done this once or twice before, for trivial tasks with very little time and not much importance, but most of the time I still have to throw away what I found because it took too much time to adapt it to what I wanted.

    In the last scenario, I sometimes have a piece of code that I am really unhappy about - I find it sloppy or overcomplicated - and I look on the Internet to see other ways to tackle the same problem, let's say filtering through a table. With the knowledge I acquired, I learned new coding practices that help me work more efficiently, like "Don't repeat yourself" and such.

    Now, in your opinion, when is it wrong to use Google (or any other research tool) to find a solution to your problem?

  • SQL Server - Framework 4 - IIS 7: weird sort from DB to page

    - by ila
    I am experiencing strange behavior when reading a result set from the database in a calling method: the order of the rows is different from what the database should return.

    My farm:

        - database server: SQL Server 2008 on Windows Server 2008 64-bit
        - web server: a couple of load-balanced Windows Server 2008 64-bit machines running IIS 7

    The application runs in a v4.0 app pool set to enable 32-bit applications.

    Here's a description of the problem:

        - a stored procedure is called that returns a result set sorted on a particular column
        - I can see the call to the SP through Profiler; if I run the statement myself, I see correct sorting
        - the calling page gets the results and, before any further elaboration, logs the rows immediately after the SP execution
        - the results are in a completely different order (I cannot even tell if they are sorted in any way)

    Some details on the stored procedure:

        - it is called by code using a SqlDataAdapter
        - it has an output value (a count of the rows) that is read correctly
        - the sort field to use is passed as a parameter
        - it makes use of temp tables to collect data and perform the desired sort

    Any idea on what I could check? The same code and same database work correctly in a test environment (32-bit and not load balanced).

  • How do I get rid of this "(" using regex?

    - by Solignis
    Hi there, I was moving along on a regex and I have hit a roadblock I can't seem to get around. I am trying to get rid of a "(" in the middle of a line of text. There were two, but I figured out how to get the one at the end of the line; it's the one in the middle I can't hack out. Here is the snippet I am searching for in the config file (two examples):

        guestOSAltName = "Ubuntu Linux (64-bit)"
        guestOSAltName = "Microsoft Windows 2000 Professional"

    Here is the snippet I am working on:

        if ($vmx_file =~ m/^\bguestOSAltName\b\s+\S\s+\W(?<GUEST_OS> .+[^")])\W/xm) {
            $virtual_machines{$vm}{"OS"} = "$+{GUEST_OS}";
        }
        else {
            $virtual_machines{$vm}{"OS"} = "N/A";
        }

    I am thinking the problem is that I cannot match the "(" because the ".+" before it matches everything in the line, be it alphanumeric, whitespace, or symbols like hyphens. Any ideas how I can get this to work? This is what I am getting in a dump of the hash:

        $VAR1 = {
            'NS02' => {
                'ID' => '144',
                'Version' => '7',
                'OS' => 'Ubuntu Linux (64-bit',
                'VMX' => '/vmfs/volumes/datastore2/NS02/NS02.vmx',
                'Architecture' => '64-bit'
            },

  • C# STILL returning the wrong number of cores

    - by Justin
    OK, so I posted in "In C# GetEnvironmentVariable("NUMBER_OF_PROCESSORS") returns the wrong number", asking how to get the correct number of cores in C#. Some helpful people directed me to a couple of questions where similar things were asked, but I have already tried those solutions. My question was then closed as being the same as another question, which is true, but the solution given there didn't work. So I'm opening another one, hoping that someone may be able to help, realising that the other solutions did NOT work. That question was "How to find the Number of CPU Cores via .NET/C#?", which used WMI to try to get the correct number of cores. Here's the output from the code given there:

        Number Of Cores: 32
        Number Of Logical Processors: 32
        Number Of Physical Processors: 4

    As per my last question, the machine is a 64-core AMD Opteron 6276 (4 x 16 cores) running Windows Server 2008 R2 HPC Edition. Regardless of what I do, Windows always seems to return 32 cores even though 64 are available. I have confirmed the machine is only using 32, and if I hardcode 64 cores then the machine uses all of them. I'm wondering if there might be an issue with the way the AMD CPUs are detected. FYI, in case you haven't read the last question: if I type echo %NUMBER_OF_PROCESSORS% at the command line, it returns 64. It just won't do it in a programming environment. Thanks, Justin

    UPDATE: Outputting PROCESSOR_ARCHITECTURE returns AMD64 from the command line, but x86 from the program. The program is 32-bit running on 64-bit hardware. I was asked to compile it to 64-bit, but it still shows 32 cores.
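
    One thing worth checking (an assumption on my part, not a confirmed diagnosis): Windows Server 2008 R2 organises logical processors into processor groups of at most 64, and a process by default sees only its own group. The Win32 call below (available on Windows 7 / Server 2008 R2 and later) counts across all groups; since the question's code is C#, the equivalent would be a P/Invoke of the same function:

        #define _WIN32_WINNT 0x0601   // Windows 7 / Server 2008 R2 APIs
        #include <windows.h>
        #include <cstdio>

        int main()
        {
            DWORD inGroup = GetActiveProcessorCount(0);                    // this group only
            DWORD total   = GetActiveProcessorCount(ALL_PROCESSOR_GROUPS); // across all groups
            std::printf("group 0: %lu, all groups: %lu\n", inGroup, total);
            return 0;
        }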

  • Java for loop with multiple incrementers

    - by user2517280
    I'm writing a program which combines the RGB pixel values of three images - e.g. the red pixel of image 1, the green pixel of image 2 and the blue pixel of image 3 - and I then want to create a final image from them. I'm using the code below, but this seems to increment x2 and x3 while x stays the same, i.e. it doesn't read the same coordinate from each image.

        for (int x = 0; x < image.getWidth(); x++) {
            for (int x2 = 0; x2 < image2.getWidth(); x2++) {
                for (int x3 = 0; x3 < image3.getWidth(); x3++) {
                    for (int y = 0; y < image.getHeight(); y++) {
                        for (int y2 = 0; y2 < image2.getHeight(); y2++) {
                            for (int y3 = 0; y3 < image3.getHeight(); y3++) {

    So I was wondering if anyone can tell me how to iterate through each of the three images at the same coordinate - so, for example, read (1, 1) of each image and record the red, green and blue values accordingly. Apologies if it doesn't make complete sense; it's a bit hard to explain. I can iterate the values for one image fine, but when I add in another, things start to go a bit wrong, as obviously it's quite a bit more complicated! I was thinking it might be easier to create an array and replace the according values in that; I'm just not sure how to do that effectively either. Thanks
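
    The fix the question is circling is a single pair of x/y loops reading the same coordinate from all three images. A sketch with a hypothetical packed-pixel Image type (the question is Java, where BufferedImage.getRGB returns the same 0xAARRGGBB layout and would slot into the same loop structure; all images are assumed to have equal dimensions):

        #include <cstdint>
        #include <vector>

        struct Image {
            int width, height;
            std::vector<std::uint32_t> argb;   // packed 0xAARRGGBB pixels
            std::uint32_t get(int x, int y) const { return argb[y * width + x]; }
            void set(int x, int y, std::uint32_t v) { argb[y * width + x] = v; }
        };

        // One loop pair, same (x, y) in all three images: red from the first,
        // green from the second, blue from the third.
        Image combine(const Image& r, const Image& g, const Image& b)
        {
            Image out{r.width, r.height,
                      std::vector<std::uint32_t>(r.width * r.height)};
            for (int y = 0; y < out.height; y++) {
                for (int x = 0; x < out.width; x++) {
                    std::uint32_t red   = (r.get(x, y) >> 16) & 0xFF;
                    std::uint32_t green = (g.get(x, y) >> 8)  & 0xFF;
                    std::uint32_t blue  =  b.get(x, y)        & 0xFF;
                    out.set(x, y, 0xFF000000u | (red << 16) | (green << 8) | blue);
                }
            }
            return out;
        }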

  • Data conversion from accelerometer

    - by mrigendra
    Hi all, I am working with an accelerometer, the BMA220, and its datasheet says the data is in two's-complement form. So what I had to do was get that 8-bit data into an 8-bit signed char and be done. The BMA220 has an 8-bit register of which the first 6 bits are data and the last two are zero.

        void properdata(int16_t *msgData)
        {
            printf("\nin proper data\n");
            int16_t temp, i;
            for (i = 0; i < 3; i++)
            {
                temp = *(msgData + i);
                printf("temp = %d sense = %d\n", temp, sense);
                temp = temp >> 2;    // only 6 bits of data
                temp = temp / sense; // decimal value * .0625 = value in g
                printf("temp = %d\n", temp);
            }
        }

    In this program I am taking the data in an unsigned variable msgData and doing all the calculations on a signed variable. I just need to know if this is the correct way to convert the data.
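
    For comparison, a minimal sketch of the usual approach (assuming the raw register byte arrives in the low 8 bits of each buffer entry): reinterpret the byte as signed 8-bit first, so the sign survives the shift that drops the two padding bits. The function name is hypothetical:

        #include <stdint.h>
        #include <stdio.h>

        /* Convert one raw BMA220 register byte (6 data bits, 2 padding zeros)
           into a signed count in the range -32..31. */
        static int bma220_counts(uint8_t reg)
        {
            int8_t v = (int8_t)reg;  /* two's-complement reinterpretation */
            return v >> 2;           /* arithmetic shift on mainstream compilers keeps the sign */
        }

        int main(void)
        {
            printf("%d\n", bma220_counts(0x40)); /* 0100 0000 ->  16 */
            printf("%d\n", bma220_counts(0xC0)); /* 1100 0000 -> -16 */
            return 0;
        }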
