Search Results

Search found 7638 results on 306 pages for 'enterprise peformance man'.


  • cpusets not working - threads aren't running in the cpuset I specified?

    - by lori
    I have used cpuset to shield some CPUs for exclusive use by some realtime threads. Displaying the cpuset config with the test app RealtimeTest1 running and its tasks moved into the cpusets:

        $ cset set --list -r
        cset: Name          CPUs-X         MEMs-X Tasks Subs Path
        ------------        ------------ - ------ ----- ---- ----------
        root                0-23         y 0-1  y   279    2 /
        system              0,2,4,6,8,10 n 0    n   202    0 /system
        shield              1,3,5,7,9,11 n 1    n     0    2 /shield
        RealtimeTest1       1,3,5,7      n 1    n     0    4 /shield/RealtimeTest1
        thread1             3            n 1    n     1    0 /shield/RealtimeTest1/thread1
        thread2             5            n 1    n     1    0 /shield/RealtimeTest1/thread2
        main                1            n 1    n     1    0 /shield/RealtimeTest1/main

    I can interrogate the cpuset filesystem to show that my tasks are supposedly pinned to the CPUs I requested:

        /cpusets/shield/RealtimeTest1 $ for i in `find -name tasks`; do echo $i; cat $i; echo "------------"; done
        ./thread1/tasks
        17651
        ------------
        ./main/tasks
        17649
        ------------
        ./thread2/tasks
        17654
        ------------

    Further, if I use sched_getaffinity, it reports what cpuset does - that thread1 is on CPU 3 and thread2 is on CPU 5. However, if I run top -p 17649 -H with f,j to bring up the last used CPU, it shows that thread1 is running on thread2's CPU, and the main thread is running on a CPU in the system cpuset. (Note that thread 17654 is running FIFO, hence thread 17651 is blocked.)

        PID   USER PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+ P COMMAND
        17654 root -2   0 54080  35m 7064 R  100  0.4 5:00.77 3 RealtimeTest
        17649 root 20   0 54080  35m 7064 S    0  0.4 0:00.05 2 RealtimeTest
        17651 root 20   0 54080  35m 7064 R    0  0.4 0:00.00 3 RealtimeTest

    Also, looking at /proc/17649/task to find the last_cpu each of its tasks ran on:

        /proc/17649/task $ for i in `ls -1`; do cat $i/stat | awk '{print $1 " is on " $(NF - 5)}'; done
        17649 is on 2
        17651 is on 3
        17654 is on 3

    So cpuset and sched_getaffinity report one thing, but reality is another. I would say that cpuset is not working? My machine configuration is:

        $ cat /etc/SuSE-release
        SUSE Linux Enterprise Server 11 (x86_64)
        VERSION = 11
        PATCHLEVEL = 1

        $ uname -a
        Linux foobar 2.6.32.12-0.7-default #1 SMP 2010-05-20 11:14:20 +0200 x86_64 x86_64 x86_64 GNU/Linux
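
    One way to cross-check what the scheduler is really doing, independently of cset, is to pin a task through the affinity API directly and watch where it actually runs. The sketch below is illustrative only - it is not the asker's RealtimeTest1 code - and assumes Linux with a glibc new enough to provide sched_getcpu():

        /* Pin the calling process to CPU 3 and report the CPU it is currently
         * executing on. Build with: gcc -o pincheck pincheck.c
         * (the file name is just an example). */
        #define _GNU_SOURCE
        #include <sched.h>
        #include <stdio.h>
        #include <unistd.h>

        int main(void)
        {
            cpu_set_t mask;
            int i;

            CPU_ZERO(&mask);
            CPU_SET(3, &mask);                      /* request CPU 3 only */

            if (sched_setaffinity(0, sizeof mask, &mask) != 0) {
                perror("sched_setaffinity");
                return 1;
            }

            for (i = 0; i < 5; i++) {
                sleep(1);                           /* give the scheduler time to migrate us */
                printf("pid %d: affinity asks for CPU 3, currently on CPU %d\n",
                       (int)getpid(), sched_getcpu());
            }
            return 0;
        }

    If a task pinned this way still shows up on the wrong CPU, the problem is unlikely to be in the cset configuration itself.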

  • Allocating More Than 4 GB Of Memory

    - by TPatti
    I am facing an issue with memory allocation. I have:

        Host OS: Microsoft Windows XP - Professional x64 Edition - Version 2003 - Service Pack 2
        Host physical memory: 8 GB
        Guest OS: Red Hat Enterprise Linux WS release 4 (Nahant Update 5). I am not sure if it is 32 or 64 bits; lsb_release -a reports LSB Version: core-3.0-ia32, so I guess that would be 32 bits...
        VMware Player version: 2.5.2 build-156735

    I would like VMware Player to allocate more than 4 GB, but when I go to the settings, it only lists 4 GB. If I choose the "About" option, it actually says that I have 8 GB installed in the host machine. This VMware image was created by someone else and provided to me, apparently with VMware Workstation 5. Why can't I allocate 8 GB? Where is the problem: in the VMware Player version, the guest OS or the host OS? How can I solve this? I understand that for this version of Player there isn't one version for 32 bits and another for 64 bits.

  • Too many connections to Sql Server 2008

    - by Luis Forero
    I have an application in C# Framework 4.0. Like many apps, this one connects to a database to get information. In my case this database is SQL Server 2008 Express, and the database is on my machine. In my data layer I'm using Enterprise Library 5.0. When I publish my app on my local machine (App Pool Classic, Windows Professional, IIS 7.5) the application works fine. I'm using this query to check the number of connections my application is creating while I'm testing it:

        SELECT db_name(dbid) as DatabaseName, count(dbid) as NoOfConnections, loginame as LoginName
        FROM sys.sysprocesses
        WHERE dbid > 0 AND db_name(dbid) = 'MyDataBase'
        GROUP BY dbid, loginame

    When I start testing, the number of connections starts growing, but at some point the maximum number of connections is 26. I think that's OK, because the app works.

    When I publish the app to TestMachine1 (XP Mode virtual machine with Windows XP Professional, IIS 5.1) it works fine; the behavior is the same, the number of connections to the database increases to 24 or 26, and after that it stays at that point no matter what I do in the application.

    The problem: when I publish to TestMachine2 (App Pool Classic, Windows Server 2008 R2, IIS 7.5) and start testing the application, the number of connections to the database starts to grow, but this time it grows very rapidly and doesn't stop at 24 or 26; the number of connections grows until it reaches 100, and the application stops working at that point. I have checked for any differences in the publications, especially between Windows Professional and Windows Server, and they seem to have the same parameters and configurations. Any clues why this could be happening? Any suggestions?

  • Display on secondary video card (Nvidia 8400 GS): horrible refresh, bogs system

    - by minameismud
    This is my work computer, but it's a small shop. We do business software development. The most hardcore thing we create is some web animations with HTML5 and fancy JavaScript/CSS. The base machine is a Dell Precision T3500 - Xeon W3550 (3.07GHz quad), 6GB RAM, a pair of 500GB hard drives, and Win 7 x64 Enterprise SP1. My primary video card is an ATI FirePro V4800 1GB in a PCIe slot of some speed, driving a pair of 23" monitors at 1920x1080 through DisplayPort-HDMI adapters. The secondary card is an NVidia GeForce 8400GS in a PCI slot, driving a single 17" at 1280x1024 through DVI. On either of the 23" monitors, windows move smoothly, scroll quickly, and are generally very responsive. On the 17", it's slow and chunky, and when I'm trying to scroll a ton of content, Windows will occasionally suggest I drop to the Windows Basic theme. I've updated drivers for both cards, and I've gotten every Windows update relating to video. Specifically:

        ATI FirePro - Provider: Advanced Micro Devices, Inc; Date: 6/22/2014; Version: 13.352.1014.0
        NVidia 8400 GS - Provider: NVIDIA; Date: 7/2/2014; Version: 9.18.13.4052

    Unfortunately, new hardware isn't really an option. Is there anything I can do software-wise to speed up the NVidia-driven monitor?

  • 2008R2 Standard and Hyper-V and Ram Usage (Usable vs Available)

    - by Mark
    A new server was purchased for our development team to start utilizing the full feature set of TFS, namely Lab Management. Because of the need for Lab Management we bought a fairly beefy machine to handle this task and also act as a build machine. I have been tasked with setting up additional TFS features on this machine, starting with a build controller and eventually moving towards a full Lab Management setup using Hyper-V. My question: upon initially logging in, I noticed that Windows registers 64GB but shows only 32GB available. I know this is a limitation of licensing, since only Standard Edition is installed. Since Hyper-V is another layer that handles the virtualization of guest OSes, is Hyper-V able to access this memory? Or is Hyper-V memory usage also limited by 2008 R2 Standard? If Hyper-V can somehow access this memory, is this how it should be set up? Or should the host 2008 R2 Standard be upgraded to Enterprise so the host can utilize the full 64GB? Before I go hog wild and start using TFS, I wanted to ask some experts so I don't need to reinstall the OS down the road to utilize the additional 32GB. Thanks for any help or links you can share.

  • 3 Servers, is this a cluster scenario?

    - by HornedBeast
    Hello. At the moment I have one Ubuntu server, 9.10, running a simple Samba share, a mail server, a DNS server and a DHCP server. Mostly it's just there for file sharing and email. I also have 2 other servers that are exactly the same hardware and spec as the first, which have rsync set up to retrieve the shared folders and back them up. However, if the first server goes down, all of our shares disappear along with our mail, and the system must be rebuilt. Also I tend to find that if people are downloading a large amount from the file server, no-one can access their emails - especially in the morning when everyone is signing in at once. Would it be more beneficial for me to have all 3 servers running the same services, doing the same thing, with some sort of cluster with load balancing? In short, how can I get the best out of my 3 hardware servers? I'm not really sure where to begin looking, or how to go about such a setup where 3 servers are all identical, but perhaps one acts as the main load balancer? If someone can point me in the right direction, or if this simply sounds like one of those Enterprise Clouds that is now a default setup in Ubuntu Server 9.10+, then I'll go down that route. Cheers in advance. Andy

  • Windows 8 doesn't start up after installation

    - by Raj BD
    I have a Dell Inspiron N5110 machine running Windows 7 Home Premium 64-bit. It has 6GB of RAM and a 500GB hard disk, capable of running any Windows or Linux system. I recently attempted to install Windows 8 on the machine. Installation goes fine and smooth until the final stage, when Windows is done installing and gets devices ready. When "Getting devices ready" reaches about 60%, the screen goes blank with the machine still running. After a while it reboots, and after the new Windows logo with the dotted circle is done showing up, the screen goes blank again (when we'd expect the login screen to show up). There is not even a cursor. I can see the hard disk activity LED blinking, but nothing shows up on the screen. I've tried a clean install several times. The DVD is fine; it installed and worked well on my friends' laptop PCs. I tried installing both the Pro version and the Enterprise version, and both 32-bit and 64-bit, but the problem was the same every time. Finally, I had to reinstall Windows 7, which installed and ran without any problem. The problem, we can guess, is perhaps graphics: I have the Intel Sandy Bridge mobile graphics chipset (Intel HD Graphics 3000). But if Windows 7 and Linux distributions like Mint and BackTrack can run on the machine, why on earth would Windows 8 not run?

  • Encryption is hard: AES encryption to Hex

    - by Rob Cameron
    So, I've got an app at work that encrypts a string using ColdFusion. ColdFusion's built-in encryption helpers make it pretty simple:

        encrypt('string_to_encrypt','key','AES','HEX')

    What I'm trying to do is use Ruby to create the same encrypted string as this ColdFusion script is creating. Unfortunately encryption is the most confusing computer science subject known to man. I found a couple of helper methods that use the openssl library and give you a really simple encryption/decryption method. Here's the resulting string:

        "\370\354D\020\357A\227\377\261G\333\314\204\361\277\250"

    Which looks unicode-ish to me. I've tried several libraries to convert this to hex but they all say it contains invalid characters. Trying to unpack it results in this:

        string = "\370\354D\020\357A\227\377\261G\333\314\204\361\277\250"
        string.unpack('U')
        ArgumentError: malformed UTF-8 character
            from (irb):19:in `unpack'
            from (irb):19

    At the end of the day it's supposed to look like this (the output of the ColdFusion encrypt method):

        F8E91A689565ED24541D2A0109F201EF

    Of course that's assuming that all the padding, initialization vectors, salts, cipher types and a million other possible differences all line up. Here's the simple script I'm using to encrypt/decrypt:

        def aes(m,k,t)
          (aes = OpenSSL::Cipher::Cipher.new('aes-256-cbc').send(m)).key = Digest::SHA256.digest(k)
          aes.update(t) << aes.final
        end

        def encrypt(key, text)
          aes(:encrypt, key, text)
        end

        def decrypt(key, text)
          aes(:decrypt, key, text)
        end

    Any help? Maybe just a simple option I can pass to OpenSSL::Cipher::Cipher that will tell it to hex-encode the final string?

  • NSTextField autocomplete

    - by Rasmus Styrk
    Does anyone know of any class or lib that can implement autocompletion for an NSTextField? I'm trying to get the standard autocompletion to work, but it is made as a synchronous API. I get my autocompletion words via an API call over the internet. What I have done so far is:

        - (void)controlTextDidChange:(NSNotification *)obj
        {
            if([obj object] == self.searchField) {
                [self.spinner startAnimation:nil];
                [self.wordcompletionStore completeString:self.searchField.stringValue];

                if(self.doingAutocomplete)
                    return;
                else {
                    self.doingAutocomplete = YES;
                    [[[obj userInfo] objectForKey:@"NSFieldEditor"] complete:nil];
                }
            }
        }

    When my store is done, I have a delegate that gets called:

        - (void) completionStore:(WordcompletionStore *)store didFinishWithWords:(NSArray *)arrayOfWords
        {
            [self.spinner stopAnimation:nil];
            self.completions = arrayOfWords;
            self.doingAutocomplete = NO;
        }

    The code that returns the completion list to the NSTextField is:

        - (NSArray *)control:(NSControl *)control textView:(NSTextView *)textView completions:(NSArray *)words forPartialWordRange:(NSRange)charRange indexOfSelectedItem:(NSInteger *)index
        {
            *index = -1;
            return self.completions;
        }

    My problem is that this will always be one request behind, and the completion list only shows on every 2nd character the user inputs. I have tried searching Google and SO like a mad man, but I can't seem to find any solutions. Any help is much appreciated.

  • How to edit the XSL for RSS Viewer Webpart

    - by Nagendra
    I am using a blog site as the source for my RSS feed. When I view the RSS feed, it shows up as the following:

        Blog: Posts Test
        Thursday, March 04, 2010 -
        Body: With 25 four's and 3 sixers Sachin crosses 200 (147 balls) runs in an single ODI innings. Creates another world record. Watch the final over where he got it double hundred with MSD on the other end. This is what he had to say after getting the MOM (man of the match): I dedicate this knock to all the people of India, who have supported me throughout over the last 20 years. I was timing the ball well, and I felt that anywhere between 340 to 350 was a good target. I thought Karthik, Yusuf and Dhoni supported me well. I thought that a 200 would be possible once I crossed 175 in the 42nd over. I am enjoying my cricket at the moment. There have been a few bad decisions I have made as a batsman, but as long as the passion is there I will carry on. It feels good that I lasted the 50 overs, it was a good test of my fitness and I would like to do this once again. Well!!! Wait for more.
        Published: 3/4/2010 3:18 PM
        More...

    I actually want to remove the Body and Published parameters. I just want my XSLT to show only the Description of the blog - there is no need for this metadata. Can anyone help me in specifying the XSL changes?

  • How to insert inline content from one FlowDocument into another?

    - by Robert Rossney
    I'm building an application that needs to allow a user to insert text from one RichTextBox at the current caret position in another one. I spent a lot of time screwing around with the FlowDocument's object model before running across this technique - source and target are both FlowDocuments:

        using (MemoryStream ms = new MemoryStream())
        {
            TextRange tr = new TextRange(source.ContentStart, source.ContentEnd);
            tr.Save(ms, DataFormats.Xaml);
            ms.Seek(0, SeekOrigin.Begin);
            tr = new TextRange(target.CaretPosition, target.CaretPosition);
            tr.Load(ms, DataFormats.Xaml);
        }

    This works remarkably well. The only problem I'm having with it now is that it always inserts the source as a new paragraph. It breaks the current run (or whatever) at the caret, inserts the source, and ends the paragraph. That's appropriate if the source actually is a paragraph (or more than one paragraph), but not if it's just (say) a line of text. I think it's likely that the answer to this is going to end up being checking the target to see if it consists entirely of a single block, and if it does, setting the TextRange to point at the beginning and end of the block's content before saving it to the stream. The entire world of the FlowDocument is a roiling sea of dark mysteries to me. I can become an expert at it if I have to (per Dostoevsky: "Man is the animal who can get used to anything."), but if someone has already figured this out and can tell me how to do this it would make my life far easier.

  • Web-based clients vs thick/rich clients?

    - by rudolfv
    My company is a software solutions provider to a major telecommunications company. The environment is currently IBM WebSphere-based, with front-end IBM Portal servers talking to a cluster of back-end WebSphere Application Servers providing EJB services. Some of the portlets use our own home-grown MVC pattern and some are written in JSF. Recently we did a proof-of-concept rich/thick-client application that communicates directly with the EJBs on the back-end servers. It was written on the NetBeans Platform and uses the WebSphere application client library to establish communication with the EJBs. The really painful bit was getting the client to use secure JAAS/SSL communications. But, after that was resolved, we've found that the rich client has a number of advantages over the web-based portal client applications we've become accustomed to:

        - Enormous performance advantage (CORBA vs. HTTP, cut out the Portal Server middle man)
        - Development is simplified and faster due to use of NetBeans' visual designer and Swing's generally robust architecture
        - The debug cycle is shortened by not having to deploy your client application to a test server
        - No mishmash of technologies as with web-based development (Struts, JSF, JQuery, HTML, JSTL etc., etc.)

    After enduring the pain of web-based development (even JSF) for a while now, I've come to the following conclusion: rich clients aren't right for every situation, but when you're developing an in-house intranet-based solution, you'd be crazy not to consider NetBeans Platform or Eclipse RCP. Any comments/experiences with rich clients vs. web clients?

  • Adding a Contact with the Google Contacts .NET API

    - by Bryan
    I am using the following code to add a contact, but I get the following unhandled exception:

        Google.GData.Client.GDataRequestException: Execution of request failed: http://www.google.com/m8/feeds/contacts/default/full

        GDataCredentials myCred = new GDataCredentials("myusername", "mypassword");
        RequestSettings myRequestSettings = new RequestSettings("macpapa-GoogleCodeTest3-1", myCred);
        ContactsRequest myContactRequest = new ContactsRequest(myRequestSettings);

        Contact myContact = new Contact();
        myContact.Title = "Be Dazzle";

        PhoneNumber myPhoneNumber = new PhoneNumber("805-453-6688");
        myPhoneNumber.Rel = ContactsRelationships.IsGeneral;
        myPhoneNumber.Primary = true;
        myContact.Phonenumbers.Add(myPhoneNumber);

        EMail myEmail = new EMail("[email protected]", ContactsRelationships.IsHome);
        EMail myEmail2 = new EMail("[email protected]", ContactsRelationships.IsWork);
        myEmail.Primary = true;
        myContact.Emails.Add(myEmail);
        myContact.Emails.Add(myEmail2);

        PostalAddress postalAddress = new PostalAddress();
        postalAddress.Value = "123 somewhere lane";
        postalAddress.Primary = true;
        postalAddress.Rel = ContactsRelationships.IsHome;
        myContact.PostalAddresses.Add(postalAddress);

        Uri feedUri = new Uri(ContactsQuery.CreateContactsUri("default"));
        Contact createdContact = myContactRequest.Insert<Contact>(feedUri, myContact);

    Please offer any available suggestions. Thank you.

  • Getting past dates in HP-UX with ksh

    - by Alejandro Atienza Ramos
    OK, so I need to translate a script from a nice Linux & bash configuration to ksh on HP-UX. Each and every command expects a different syntax and I want to kill myself. But let's skip the rant. This is part of my script:

        anterior=`date +"%Y%0m" -d '1 month ago'`

    I basically need to get a past date in the format 201002. Never mind the fact that, in the new environment, %0m means "no zeroes", while in the other one it actually means "yes, please put that zero on my string". It doesn't even accept the "1 month ago". I've read the date man page for HP-UX, and it seems you just can't do date arithmetic with it. I've been looking around for a while, but all I find are lengthy solutions. I can't quite understand why such a typical administrative task as adding dates needs so much fuss. Isn't there a way to convert my one-liner to, well, I don't know, another one-liner? Come on, I've seen proposed solutions that used bc, had thirty-plus lines and magic numbers all over the script. The simplest solutions seem to use Perl... but I don't know how to modify them, as they're quite arcane. Thanks!
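
    Since HP-UX's date(1) cannot do the arithmetic itself, one alternative to the Perl route is a tiny compiled helper that lets mktime() normalise "one month ago" and prints YYYYMM. This is only an illustrative sketch, not part of the original script, and it assumes an ANSI C compiler is available on the box:

        /* lastmonth.c - print the previous month as YYYYMM (e.g. 201002).
         * The file and program names are made up for this example. */
        #include <stdio.h>
        #include <time.h>

        int main(void)
        {
            time_t now = time(NULL);
            struct tm t = *localtime(&now);
            char buf[8];

            t.tm_mday = 1;      /* avoid end-of-month skew (31 March minus a month, etc.) */
            t.tm_mon -= 1;      /* one month back; mktime() wraps the year if needed      */
            mktime(&t);

            strftime(buf, sizeof buf, "%Y%m", &t);
            puts(buf);
            return 0;
        }

    Compiled once (cc -o lastmonth lastmonth.c), it keeps the script a one-liner again: anterior=`./lastmonth`.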

  • gcc: Do I need -D_REENTRANT with pthreads?

    - by stefanB
    On Linux (kernel 2.6.5) our build system calls gcc with -D_REENTRANT. Is this still required when using pthreads? How is it related to the gcc -pthread option? I understand that I should use -pthread with pthreads; do I still need -D_REENTRANT? On a side note, is there any difference that you know of in the usage of _REENTRANT between gcc 3.3.3 and gcc 4.x.x? When I use the -pthread gcc option I can see that _REENTRANT gets defined. Will omitting -D_REENTRANT from the command line make any difference - for example, could some objects be compiled without multithreaded support and then linked into a binary that uses pthreads and cause problems? I assume it should be OK just to use g++ -pthread:

        > echo | g++ -E -dM -c - > singlethreaded
        > echo | g++ -pthread -E -dM -c - > multithreaded
        > diff singlethreaded multithreaded
        39a40
        > #define _REENTRANT 1

    We're compiling multiple static libraries and applications that link with the static libraries; both the libraries and the applications use pthreads. I believe it was required at some stage in the past, but I want to know if it is still required. Googling hasn't returned any recent information mentioning -D_REENTRANT with pthreads. Could you point me to links or references discussing its use with recent versions of kernel/gcc/pthread? Clarification: at the moment we're using -D_REENTRANT and -lpthread; I assume I can replace them with just g++ -pthread - looking at man gcc, it sets the flags for both preprocessor and linker. Any thoughts?
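
    As a quick empirical check (an illustrative sketch, not code from the question), the toy program below prints whether _REENTRANT was visible at compile time; building it once with -pthread and once without reproduces the diff above from inside the code:

        /* reentrant_check.c - does the compile environment define _REENTRANT?
         * Build: gcc -pthread reentrant_check.c -o reentrant_check
         * (the file name is just an example; g++ works the same way). */
        #include <pthread.h>
        #include <stdio.h>

        static void *worker(void *arg)
        {
            (void)arg;
        #ifdef _REENTRANT
            puts("_REENTRANT is defined");
        #else
            puts("_REENTRANT is NOT defined");
        #endif
            return NULL;
        }

        int main(void)
        {
            pthread_t tid;

            if (pthread_create(&tid, NULL, worker, NULL) != 0)
                return 1;
            pthread_join(tid, NULL);
            return 0;
        }

    On current glibc toolchains the -pthread switch both defines _REENTRANT and links libpthread, so a separate -D_REENTRANT -lpthread pair is redundant - but verify against the gcc 3.3.3 documentation before dropping it from an older toolchain.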

  • pthreads: reader/writer locks, upgrading read lock to write lock

    - by ScaryAardvark
    I'm using read/write locks on Linux and I've found that trying to upgrade a read-locked object to a write lock deadlocks, i.e.:

        // acquire the read lock in thread 1.
        pthread_rwlock_rdlock( &lock );

        // make a decision to upgrade the lock in thread 1.
        pthread_rwlock_wrlock( &lock ); // this deadlocks as we already hold the read lock.

    I've read the man page and it's quite specific:

        The calling thread may deadlock if at the time the call is made it holds the read-write lock (whether a read or write lock).

    What is the best way to upgrade a read lock to a write lock in these circumstances? I don't want to introduce a race on the variable I'm protecting. Presumably I can create another mutex to encompass the releasing of the read lock and the acquiring of the write lock, but then I don't really see the use of read/write locks. I might as well simply use a normal mutex. Thx
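
    POSIX has no atomic upgrade, so the usual pattern is the unglamorous one: drop the read lock, take the write lock, then revalidate whatever decision prompted the upgrade, because another writer may have slipped in between the two calls. A minimal sketch (the protected data and predicate are hypothetical):

        #include <pthread.h>
        #include <stdbool.h>

        static pthread_rwlock_t lock = PTHREAD_RWLOCK_INITIALIZER;
        static int shared_value;                     /* hypothetical protected data */

        static bool needs_update(int v) { return v < 42; }

        void maybe_update(void)
        {
            bool update;

            pthread_rwlock_rdlock(&lock);
            update = needs_update(shared_value);
            pthread_rwlock_unlock(&lock);            /* give up the read lock first */

            if (update) {
                pthread_rwlock_wrlock(&lock);
                if (needs_update(shared_value))      /* revalidate: state may have changed */
                    shared_value = 42;
                pthread_rwlock_unlock(&lock);
            }
        }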

  • split video (avi/h264) on keyframe

    - by m.sr
    Hello. I have a big video file. ffmpeg, tcprobe and other tools say it is an h264 stream in an AVI container. Now I'd like to cut small chunks out of the video.

    Problem: the index of the video seems corrupted/destroyed. I kind of fixed this via:

        mplayer -forceidx -saveidx <IndexFile> <BigVideoFile>

    The problem here is that I'm now stuck with mplayer/mencoder, which can use this index file via -loadidx <IndexFile>. I have tried correcting the index as described in man aviindex:

        mplayer -frames 0 -saveidx mpidx broken.avi
        aviindex -i mpidx -o tcindex
        avimerge -x tcindex -i broken.avi -o fixed.avi

    but this didn't fix my video - meaning that most tools I've tested couldn't seek in the video file.

    Problem: I cut out parts of the video via the following command:

        mencoder -loadidx in.idx -ss 8578 -endpos 20 -oac faac -ovc x264 -sws 9 -lavfopts format=mp4 -x264encopts <LotsOfOpts> -of lavf -vf scale=800:-10,harddup in.avi -o out.mp4

    Now the problem here is that some videos are corrupted at the beginning. I think this is due to the fact that I do not necessarily cut at a keyframe.

    Questions: What is the best way to fix the index of an AVI "inline" so that every tool can again work with it as expected? How can I split at keyframes - is there an mencoder option for this? Do keyframes come at a regular frequency, and how can I find out this frequency? (With a bit of math it should then be possible to calculate the next keyframe and cut there.) Is there perhaps some completely different way to split this movie? Doing it by hand is not an option; I have to cut out 1000+ chunks... Thanks a lot!

  • Phonegap Screenshot plugin in Cordova 2.0.0

    - by ObjectiveJ
    I have set up the screenshot plugin from GitHub, located here: https://github.com/phonegap/phonegap-plugins/tree/master/Android/Screenshot

    I set it up as instructed with Cordova 1.8.1, and it worked: the screenshot was saved to the phone. However, it fails with Cordova 2.0.0.

        Screenshot.java code: https://github.com/phonegap/phonegap-plugins/blob/master/Android/Screenshot/src/org/apache/cordova/Screenshot.java
        Screenshot.js code: https://github.com/phonegap/phonegap-plugins/blob/master/Android/Screenshot/www/Screenshot.js

    On the advice of a very clever man called Simon MacDonald, I removed lines 31 and 38 from the JS file shown above. However, when I try to use the screenshot plugin with Cordova 2.0.0 I receive these errors:

        ERROR: org.json.JSONException: Value undefined of type java.lang.String cannot be converted to JSONArray.
        Error: Status=8 Message=JSON error
        file:///android_asset/www/cordova-2.0.0.js: Line 938 : Error: Status=8 Message=JSON error
        Error: Status=8 Message=JSON error at file:///android_asset_/www/cordova-2.0.0.js:938

    Line 938 of cordova.js is:

        // If error, then display error
        else {
            console.log("Error: Status="+v.status+" Message="+v.message);

    but I'm almost certain this is a compatibility error. Does anyone know a fix for this, or even a reason? I'm a bit lost. I call the screenshot.js with this code:

        function takeScreenShot() {
            cordovaRef.exec("Screenshot.saveScreenshot");
        }

    Any help is massively appreciated.

  • overriding enumeration base type using pragma or code change

    - by vprajan
    Problem: I am using a big C/C++ code base which works with the gcc and Visual Studio compilers, where the enum base type is by default 32-bit (integer type). This code also has lots of inline + embedded assembly which treats enums as integer type, and enum data is used as 32-bit flags in many cases. When we compiled this code with the RealView ARM RVCT 2.2 compiler, we started getting many issues, since the RealView compiler decides the enum base type automatically based on the values the enum is set to: http://www.keil.com/support/man/docs/armccref/armccref_Babjddhe.htm

    For example, consider the enum below:

        enum Scale {
            TimesOne,   //0
            TimesTwo,   //1
            TimesFour,  //2
            TimesEight, //3
        };

    This enum is used as a 32-bit flag, but the compiler optimizes its base type down to unsigned char. Using the --enum_is_int compiler option is not a good solution in our case, since it converts all enums to 32-bit, which will break interaction with any external code compiled without --enum_is_int. This is the warning I found in the RVCT Compilers and Libraries guide:

        The --enum_is_int option is not recommended for general use and is not required for ISO-compatible source. Code compiled with this option is not compliant with the ABI for the ARM Architecture (base standard) [BSABI], and incorrect use might result in a failure at runtime. This option is not supported by the C++ libraries.

    Question: how do I convert every enum's base type (by hand-coded changes) to 32-bit without affecting value ordering?

        enum Scale {
            TimesOne = 0x00000000,
            TimesTwo,   // 0x00000001
            TimesFour,  // 0x00000002
            TimesEight, // 0x00000003
        };

    I tried the above change, but to our bad luck the compiler optimizes this too. :( There is some syntax in .NET like

        enum Scale : int

    Is this an ISO C++ standard that the ARM compiler lacks? There is no #pragma to control this enum in the ARM RVCT 2.2 compiler. Is there any hidden pragma available?
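
    One hand-coded workaround that usually defeats this shrinking is to add a sentinel enumerator whose value cannot fit in anything narrower than 32 bits, so the compiler has no smaller base type to pick. This is a common idiom rather than something taken from the RVCT documentation, so whether 2.2 honours it should be verified; the sentinel name below is made up:

        enum Scale {
            TimesOne   = 0x00000000,
            TimesTwo   = 0x00000001,
            TimesFour  = 0x00000002,
            TimesEight = 0x00000003,
            Scale_Force32Bit = 0x7FFFFFFF   /* sentinel: forces a 32-bit underlying type */
        };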

  • HttpWebRequest: The request was aborted: The request was canceled.

    - by Emeka
    I've been working on developing a middle-man application of sorts, which uploads text to a CMS backend using HTTP POST requests for a series of dates (usually 7 at a time). I am using HttpWebRequest to accomplish this. It seems to work fine for the first date, but when it starts the second date I get the exception System.Net.WebException: The request was aborted: The request was canceled. I've searched around and found the following big clues:

        http://social.msdn.microsoft.com/Forums/en-US/netfxnetcom/thread/0d0afe40-c62a-4089-9d8b-fb4d206434dc
        http://www.jaxidian.org/update/2007/05/05/8
        http://arnosoftwaredev.blogspot.com/2006/09/net-20-httpwebrequestkeepalive-and.html

    and they haven't been too helpful. I've tried overloading GetWebRequest, but that doesn't make sense because I don't make any use of that function. Here is my code: http://pastebin.org/115268

    I get the error on line 245 after it has run successfully at least once. I'd appreciate any help I can get, as this is the last step in a project I've been working on for some time. This is my first C#/VS project, so I'm open to any tips, but I would like to focus on getting this problem solved first. Thanks!

  • What is the difference between AF_INET and PF_INET constants?

    - by Denilson Sá
    Looking at examples of socket programming, we can see that some people use AF_INET while others use PF_INET. In addition, sometimes both of them are used in the same example. The question is: is there any difference between them? Which one should we use? If you can answer that, another question would be: why are there these two similar (but equal) constants?

    What I've discovered so far:

    The socket manpage: in (Unix) socket programming, we have the socket() function that receives the following parameters:

        int socket(int domain, int type, int protocol);

    The manpage says:

        The domain argument specifies a communication domain; this selects the protocol family which will be used for communication. These families are defined in <sys/socket.h>.

    And the manpage cites AF_INET as well as some other AF_ constants for the domain parameter. Also, in the NOTES section of the same manpage, we can read:

        The manifest constants used under 4.x BSD for protocol families are PF_UNIX, PF_INET, etc., while AF_UNIX etc. are used for address families. However, already the BSD man page promises: "The protocol family generally is the same as the address family", and subsequent standards use AF_* everywhere.

    The C headers: sys/socket.h does not actually define those constants, but instead includes bits/socket.h. This file defines around 38 AF_ constants and 38 PF_ constants like this:

        #define PF_INET 2 /* IP protocol family. */
        #define AF_INET PF_INET

    Python: the Python socket module is very similar to the C API. However, there are many AF_ constants but only one PF_ constant (PF_PACKET). Thus, in Python we have no choice but to use AF_INET. I think this decision to include only the AF_ constants follows one of the guiding principles: "There should be one-- and preferably only one --obvious way to do it." (The Zen of Python)
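
    A small illustration of the historical intent (a sketch, not code from the question): PF_INET names the protocol family handed to socket(), AF_INET names the address family stored in the sockaddr, and on common platforms the two constants are literally equal, as the bits/socket.h excerpt above shows:

        #include <arpa/inet.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/socket.h>
        #include <unistd.h>

        int main(void)
        {
            struct sockaddr_in addr;
            int fd = socket(PF_INET, SOCK_STREAM, 0);   /* protocol family */

            if (fd < 0) {
                perror("socket");
                return 1;
            }

            memset(&addr, 0, sizeof addr);
            addr.sin_family = AF_INET;                  /* address family */
            addr.sin_port   = htons(8080);
            inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

            printf("PF_INET == AF_INET? %s\n", PF_INET == AF_INET ? "yes" : "no");
            close(fd);
            return 0;
        }

    In new code, the pragmatic answer is the one the manpage's NOTES section already gives: subsequent standards use AF_* everywhere.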

  • Not able to use 7-Zip to compress stdin and output with stdout?

    - by acidzombie24
    I get the error "Not implemented". I want to compress a file using 7-Zip via stdin then take the data via stdout and do more conversions with my application. In the man page it shows this example: % echo foo | 7z a dummy -tgzip -si -so /dev/null I am using Windows and C#. Results: 7-Zip 4.65 Copyright (c) 1999-2009 Igor Pavlov 2009-02-03 Creating archive StdOut System error: Not implemented Code: public static byte[] a7zipBuf(byte[] b) { string line; var p = new Process(); line = string.Format("a dummy -t7z -si -so "); p.StartInfo.Arguments = line; p.StartInfo.FileName = @"C:\Program Files\7-Zip\7z.exe"; p.StartInfo.WindowStyle = ProcessWindowStyle.Hidden; p.StartInfo.CreateNoWindow = true; p.StartInfo.UseShellExecute = false; p.StartInfo.RedirectStandardOutput = true; p.StartInfo.RedirectStandardError = true; p.StartInfo.RedirectStandardInput = true; p.Start(); p.StandardInput.BaseStream.Write(b, 0, b.Length); p.StandardInput.Close(); Console.Write(p.StandardError.ReadToEnd()); //Console.Write(p.StandardOutput.ReadToEnd()); return p.StandardOutput.BaseStream.ReadFully(); } Is there another simple way to read the file into memory? Right now I can 1) write to a temporary file and read (easy and can copy/paste some code) 2) use a file pipe (medium? I have never done it) 3) Something else.

  • POSIX AIO Library and Callback Handlers

    - by Charles Salvia
    According to the documentation on aio_read/write, there are basically two ways that the AIO library can inform your application that an async file I/O operation has completed: either 1) you can use a signal, or 2) you can use a callback function.

    I think that callback functions are vastly preferable to signals, and would probably be much easier to integrate into higher-level multi-threaded libraries. Unfortunately, the documentation for this functionality is a mess, to say the least. Some sources, such as the man page for the sigevent struct, indicate that you need to set the sigev_notify data member in the sigevent struct to SIGEV_CALLBACK and then provide a function handler. Presumably, the handler is invoked in the same thread. Other documentation indicates you need to set sigev_notify to SIGEV_THREAD, which will invoke the callback handler in a newly created thread. In any case, on my Linux system (Ubuntu with a 2.6.28 kernel) SIGEV_CALLBACK doesn't seem to be defined anywhere, but SIGEV_THREAD works as advertised.

    Unfortunately, creating a new thread to invoke the callback handler seems really inefficient, especially if you need to invoke many handlers. It would be better to use an existing pool of threads, similar to the way most network I/O event demultiplexers work. Some versions of UNIX, such as QNX, include a SIGEV_SIGNAL_THREAD flag, which allows you to invoke handlers using a specified existing thread, but this doesn't seem to be available on Linux, nor does it even seem to be part of the POSIX standard.

    So, is it possible to use the POSIX AIO library in a way that invokes user handlers in a pre-allocated background thread/threadpool, rather than creating/destroying a new thread every time a handler is invoked?
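
    For reference, here is a minimal sketch of the SIGEV_THREAD variant that does work on Linux (illustrative only; the file path is arbitrary and error handling is pared down). Note that glibc runs the callback on a thread it creates and manages itself, which is exactly the behaviour the question is trying to avoid - routing completions into your own pre-allocated pool is not something this interface offers:

        /* Build with: gcc aio_demo.c -lrt  (older glibc needs -lrt for POSIX AIO). */
        #include <aio.h>
        #include <fcntl.h>
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>

        static char buf[4096];

        static void on_complete(union sigval sv)      /* runs on a glibc-created thread */
        {
            struct aiocb *cb = sv.sival_ptr;
            printf("read %zd bytes\n", aio_return(cb));
        }

        int main(void)
        {
            struct aiocb cb;
            int fd = open("/etc/hostname", O_RDONLY); /* arbitrary example file */

            if (fd < 0) {
                perror("open");
                return 1;
            }

            memset(&cb, 0, sizeof cb);
            cb.aio_fildes = fd;
            cb.aio_buf    = buf;
            cb.aio_nbytes = sizeof buf;
            cb.aio_sigevent.sigev_notify          = SIGEV_THREAD;
            cb.aio_sigevent.sigev_notify_function = on_complete;
            cb.aio_sigevent.sigev_value.sival_ptr = &cb;

            if (aio_read(&cb) != 0) {
                perror("aio_read");
                return 1;
            }

            sleep(1);                                 /* crude: wait for the callback */
            close(fd);
            return 0;
        }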

  • Database warehouse design: fact tables and dimension tables

    - by morpheous
    I am building a poor man's data warehouse using an RDBMS. I have identified the key 'attributes' to be recorded as:

        - sex (true/false)
        - demographic classification (A, B, C etc)
        - place of birth
        - date of birth
        - weight (recorded daily): the fact that is being recorded

    My requirements are to be able to run 'OLAP' queries that allow me to 'slice and dice', 'drill up/down' the data and, generally, be able to view the data from different perspectives.

    After reading up on this topic area, the general consensus seems to be that this is best implemented using dimension tables rather than normalized tables. Assuming that this assertion is true (i.e. the solution is best implemented using fact and dimension tables), I would like to seek some help in the design of these tables. 'Natural' (or obvious) dimensions are the date dimension and geographical location, which have hierarchical attributes. However, I am struggling with how to model the sex (true/false) and demographic classification (A, B, C etc) fields. The reason I am struggling with these fields is that:

        - They have no obvious hierarchical attributes which would aid aggregation (AFAIA) - which suggests they should be in a fact table
        - They are mostly static or change very rarely - which suggests they should be in a dimension table

    Maybe the heuristic I am using above is too crude? I will give some examples of the type of analysis I would like to carry out on the data warehouse - hopefully that will clarify things further. I would like to aggregate and analyze the data by sex and demographic classification - e.g. answer questions like: How do male and female weights compare across different demographic classifications? Which demographic classification (male AND female) shows the greatest increase in weight this quarter? etc.

    Can anyone clarify whether sex and demographic classification are part of the fact table, or whether they are (as I suspect) dimension tables? Also, assuming they are dimension tables, could someone elaborate on the table structures (i.e. the fields)? The 'obvious' schema:

        CREATE TABLE sex_type (is_male int);
        CREATE TABLE demographic_category (id int, name varchar(4));

    may not be the correct one.

  • Flush kernel's TCP buffer with `MSG_MORE`-flagged packets

    - by timn
    send()'s man page reveals the MSG_MORE flag, which is asserted to act like TCP_CORK. I have a wrapper function around send():

        int SocketConnection_Write(SocketConnection *this, void *buf, int len) {
            errno = 0;

            int sent = send(this->fd, buf, len, MSG_NOSIGNAL);

            if (errno == EPIPE || errno == ENOTCONN) {
                throw(exc, &SocketConnection_NotConnectedException);
            } else if (errno == ECONNRESET) {
                throw(exc, &SocketConnection_ConnectionResetException);
            } else if (sent != len) {
                throw(exc, &SocketConnection_LengthMismatchException);
            }

            return sent;
        }

    Assuming I want to use the kernel buffer, I could go with TCP_CORK, enable it whenever necessary and then disable it to flush the buffer. But on the other hand, that creates the need for an additional system call. Thus, the usage of MSG_MORE seems more appropriate to me. I'd simply change the above send() line to:

        int sent = send(this->fd, buf, len, MSG_NOSIGNAL | MSG_MORE);

    According to lwm.net, packets will be flushed automatically if they are large enough:

        If an application sets that option on a socket, the kernel will not send out short packets. Instead, it will wait until enough data has shown up to fill a maximum-size packet, then send it. When TCP_CORK is turned off, any remaining data will go out on the wire.

    But this section only refers to TCP_CORK. Now, what is the proper way to flush MSG_MORE packets? I can only think of two possibilities:

        1. Call send() with an empty buffer and without MSG_MORE being set
        2. Re-apply the TCP_CORK option as described on this page

    Unfortunately the whole topic is very poorly documented and I couldn't find much on the Internet. I am also wondering how to check that everything works as expected. Obviously, running the server through `strace' is not an option. So would the simplest way be to use `netcat' and then look at its `strace' output? Or will the kernel handle traffic differently when it is transmitted over a loopback interface?
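
    One way to think about flushing is that the last send() of a logical record simply omits MSG_MORE: only data sent while the flag is set is held back, so the first un-flagged send pushes everything out. The fragment below is a hedged sketch of that pattern (the record/header/body split is invented for illustration); toggling TCP_CORK off via setsockopt() is the other documented flush:

        #include <string.h>
        #include <sys/socket.h>

        /* fd is assumed to be a connected TCP socket. */
        int send_record(int fd, const char *header, const char *body)
        {
            /* Header: more data follows, so let the kernel hold it back. */
            if (send(fd, header, strlen(header), MSG_NOSIGNAL | MSG_MORE) < 0)
                return -1;

            /* Body: MSG_MORE omitted, so anything still buffered goes out now. */
            if (send(fd, body, strlen(body), MSG_NOSIGNAL) < 0)
                return -1;

            return 0;
        }

    As for observing it, a packet capture (e.g. tcpdump on the loopback interface) shows the resulting segment boundaries more directly than strace on the peer would.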
