Search Results

Search found 7447 results on 298 pages for 'estrategia hardware'.

  • Using virtualization infrastructure for J2EE application distribution - viable alternative?

    - by Dan
    Our company builds custom J2EE web solutions. At the moment, we use standard J2EE distribution mechanisms (ear/war archives). Application servers are generally administered by our clients' IT departments, and since we do not have complete control over the environment, a lot of entropy can be introduced into the solution. For example:

    - the latest app server patch is not applied
    - conflicting third-party libraries sit inside the app server root
    - server runtime and tuning parameters are not configured (for example, the number of connections in the database pool)

    We are looking into using virtualization infrastructure for J2EE application distribution. Instead of sending the ear/war archive, we'd send an image with an application server node and our application preinstalled. Some of the benefits are the same as for virtualization infrastructure in general, namely better use of hardware resources. For us, it also reduces the entropy of the hosting infrastructure - a distributed VM should be less affected by the hosting environment. So far, the downside I see is application server licensing: clients will have to use dedicated servers for our solution, but this is generally already done that way. There is also the complexity of maintaining the virtualization infrastructure, but this is often something IT departments have more experience with than administering and fine-tuning J2EE solutions. Does anyone have experience with this model? What are the downsides? Will we not just replace one type of complexity with another?

  • How to optimize a PostgreSQL server for a "write once, read many"-type infrastructure?

    - by mhu
    Greetings, I am working on a piece of software that logs entries (and related tagging) in a PostgreSQL database for storage and retrieval. We never update any data once it has been inserted; we might remove entries when they get too old, but this is done at most once a day. Stored entries can be retrieved by users. The insertion of new entries can happen rather fast and regularly, so the database will commonly hold several million elements. The tables used are pretty simple: one table for ids, raw content and insertion date, and one table storing tags and their values associated with an id. User searches mostly concern tag values, so SELECTs usually consist of JOIN queries on ids across the two tables. To sum it up:

    - 2 tables
    - lots of INSERTs
    - no UPDATEs
    - some DELETEs, once a day at most
    - some user-generated SELECTs with JOINs
    - a huge data set

    What would an optimal server configuration (software and hardware; I assume, for example, that RAID 10 could help) be for my PostgreSQL server, given these requirements? By optimal, I mean one that allows SELECT queries to take a reasonably small amount of time. I can provide more information about the current setup (tables, indexes, ...) if needed.
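
    For illustration, a minimal sketch of the index side of such a setup, under hypothetical table and column names (entries and tags are assumptions, not from the question); each index is matched to one of the workload items above:

        -- Serves user searches on tag values.
        CREATE INDEX tags_name_value_idx  ON tags (name, value);
        -- Serves the JOIN back from tags to entries on the id.
        CREATE INDEX tags_entry_id_idx    ON tags (entry_id);
        -- Serves the once-a-day DELETE of old rows.
        CREATE INDEX entries_inserted_idx ON entries (inserted_at);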

  • Does Interlocked guarantee visibility to other threads in C# or do I still have to use volatile?

    - by Lirik
    I've been reading the answer to a similar question, but I'm still a little confused... Abel had a great answer, but this is the part that I'm unsure about:

        ...declaring a variable volatile makes it volatile for every single access. It is impossible to force this behavior any other way, hence volatile cannot be replaced with Interlocked. This is needed in scenarios where other libraries, interfaces or hardware can access your variable and update it anytime, or need the most recent version.

    Does Interlocked guarantee visibility of the atomic operation to all threads, or do I still have to use the volatile keyword on the value in order to guarantee visibility of the change? Here is my example:

        public class CountDownLatch
        {
            private volatile int m_remain; // <--- do I need the volatile keyword there since I'm using Interlocked?
            private EventWaitHandle m_event;

            public CountDownLatch(int count)
            {
                Reset(count);
            }

            public void Reset(int count)
            {
                if (count < 0)
                    throw new ArgumentOutOfRangeException();
                m_remain = count;
                m_event = new ManualResetEvent(false);
                if (m_remain == 0)
                {
                    m_event.Set();
                }
            }

            public void Signal()
            {
                // The last thread to signal also sets the event.
                if (Interlocked.Decrement(ref m_remain) == 0)
                    m_event.Set();
            }

            public void Wait()
            {
                m_event.WaitOne();
            }
        }

  • Should a C++ constructor do real work?

    - by Wade Williams
    I'm struggling with some advice I have in the back of my mind but for which I can't remember the reasoning. I seem to remember at some point reading advice (I can't remember the source) that C++ constructors should not do real work. Rather, they should initialize variables only. The advice went on to explain that real work should be done in some sort of init() method, to be called separately after the instance was created. The situation is that I have a class that represents a hardware device. It makes logical sense to me for the constructor to call the routines that query the device in order to build up the instance variables that describe it. In other words, once new instantiates the object, the developer receives an object which is ready to be used, with no separate call to init() required. Is there a good reason why constructors shouldn't do real work? Obviously it could slow allocation time, but that wouldn't be any different from calling a separate method immediately after allocation. I'm just trying to figure out what gotchas I'm not currently considering that might have led to such advice.
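
    As a point of comparison, here is a minimal sketch of the RAII style the question leans toward; Device, DeviceInfo and queryDevice are hypothetical names. A constructor that does the work and throws on failure guarantees that every successfully constructed object is usable:

        #include <stdexcept>

        struct DeviceInfo { /* capabilities reported by the hardware */ };

        class Device {
        public:
            explicit Device(int port) {
                // Real work happens here: probe the hardware up front, so a
                // successfully constructed Device is always ready to use.
                if (!queryDevice(port, &info_))
                    throw std::runtime_error("device query failed");
            }

        private:
            // Hypothetical probe routine standing in for the real query calls.
            static bool queryDevice(int /*port*/, DeviceInfo* /*out*/) { return true; }

            DeviceInfo info_;
        };

    The counter-arguments usually cited for the init() advice are that a throwing constructor is the only error channel available (constructors have no return value) and that virtual calls made during construction do not dispatch to derived-class overrides.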

  • What's the best Linux backup solution?

    - by Jon Bright
    We have four Linux boxes (all running Debian or Ubuntu) on our office network. None of these boxes is especially critical and they're all using RAID. To date, I've therefore been doing backups of the boxes by having a cron job upload tarballs containing the contents of /etc, MySQL dumps and other such changing, non-packaged data to a box at our geographically separate hosting centre. I've realised, however, that:

    - the tarballs are sufficient to rebuild from, but it's certainly not a painless process to do so (I recently tried this out as part of a hardware upgrade of one of the boxes); long-term, the process isn't sustainable. Each of the boxes is currently producing a tarball of a couple of hundred MB each day, 99% of which is the same as the previous day
    - partly due to the size issue, the backup process requires more manual intervention than I want (to find whatever 5 GB file is inflating the size of the tarball and kill it)
    - again due to the size issue, I'm leaving out stuff which it would be nice to include - the contents of users' home directories, for example. There's almost nothing of value there that isn't in source control (and these aren't our main dev boxes), but it would be nice to keep them anyway
    - there must be a better way

    So, my question is, how should I be doing this properly? The requirements are:

    - needs to be an offsite backup (one of the main things I'm doing here is protecting against fire/whatever)
    - should require as little manual intervention as possible (I'm lazy, and box-herding isn't my main job)
    - should continue to scale with a couple more boxes, slightly more data, etc.
    - preferably free/open source (cost isn't the issue, but especially for backups, openness seems like a good thing)
    - an option to produce some kind of DVD/Blu-Ray/whatever backup from time to time wouldn't be bad

    My first thought was that this kind of incremental backup was what tar was created for - create a tar file once each month, add to it incrementally, rsync the results to the remote box. But others probably have better suggestions.
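
    To make the incremental idea concrete, a minimal sketch of a nightly hard-link-rotation backup with rsync (the host name, remote account and paths are placeholders); unchanged files are hard-linked against the previous snapshot, so each day costs only the delta:

        #!/bin/sh
        # Nightly snapshot: hard-link files unchanged since yesterday's copy.
        HOST=$(hostname)
        TODAY=$(date +%F)
        rsync -a --delete \
              --link-dest=/backups/$HOST/latest \
              /etc /var/backups/mysql /home \
              backup@offsite.example.com:/backups/$HOST/$TODAY
        # (on the remote side, repoint the 'latest' symlink at $TODAY afterwards)

    Tools such as rsnapshot and rdiff-backup automate exactly this rotation, including pruning of old snapshots.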

  • How to send KeyEvents through an input method service to a Dialog, or a Spinner menu?

    - by shutdown11
    I'm trying to implement an input method service that receives intents sent by a remote client and, in response to those, sends an appropriate KeyEvent. In the input method service I'm using this method

        private void keyDownUp(int keyEventCode) {
            getCurrentInputConnection().sendKeyEvent(
                    new KeyEvent(KeyEvent.ACTION_DOWN, keyEventCode));
            getCurrentInputConnection().sendKeyEvent(
                    new KeyEvent(KeyEvent.ACTION_UP, keyEventCode));
        }

    to send KeyEvents, as in the SoftKeyboard sample, and it works on the home screen, in Activities... but it doesn't work when a Dialog or a Spinner's menu is in the foreground. The event is sent to the parent activity behind the Dialog. Is there any way to send keys and control the device, as if using the hardware keys, from an input method? A better explanation of what I'm trying to do: I am writing an input method that allows the device to be controlled remotely. In a client (a Java application on my desktop PC) I write a command (for example "UP"); a server on the device sends the intent with the information using sendBroadcast(), and a receiver in the input method gets it and calls keyDownUp with the keycode of the DPAD_UP key. It generally works, but when I go to an app that shows a dialog, keyDownUp doesn't send the key event to the dialog (for example, to select the yes or no buttons); it keeps controlling the activity behind the Dialog.

  • Forwarding HTTP Request with Direct Server Return

    - by Daniel Crabtree
    I have servers spread across several data centers, each storing different files. I want users to be able to access the files on all servers through a single domain and have the individual servers return the files directly to the users. The following shows a simple example:

    1. The user's browser requests http://www.example.com/files/file1.zip
    2. The request goes to server A, based on the DNS A record for example.com.
    3. Server A analyzes the request and works out that /files/file1.zip is stored on server B.
    4. Server A forwards the request to server B.
    5. Server B returns file1.zip directly to the user without going through server A.

    Note: steps 4 and 5 must be transparent to the user and cannot involve sending a redirect to the user, as that would violate the requirement of a single domain. From my research, what I want to achieve is called "Direct Server Return" and it is a common setup for load balancing. It is also sometimes called a half reverse proxy. For step 4, it sounds like I need to do MAC address translation and then pass the request back onto the network; for servers outside the network of server A, tunneling will be required. For step 5, I simply need to configure server B as per the real servers in a load-balancing setup. Namely, server B should have server A's IP address on the loopback interface and it should not answer any ARP requests for that IP address. My problem is how to actually achieve step 4. I have found plenty of hardware and software that can do this for simple load balancing at layer 4, but these solutions fall short and cannot handle the kind of custom routing I require. It seems like I will need to roll my own solution. Ideally, I would like to do the routing/forwarding at the web server level, i.e. in PHP or C#/ASP.NET. However, I am open to doing it at a lower level such as Apache or IIS, or at an even lower level, i.e. a custom proxy service in front of everything.
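
    For step 5, the real-server side is well trodden; a sketch of the usual LVS-style direct-return configuration on Linux, with 203.0.113.10 standing in as a placeholder for server A's shared IP:

        # On server B: accept traffic addressed to server A's IP without ARPing for it.
        ip addr add 203.0.113.10/32 dev lo
        sysctl -w net.ipv4.conf.all.arp_ignore=1    # only answer ARP for addresses on the queried interface
        sysctl -w net.ipv4.conf.all.arp_announce=2  # pick the best local source address for outgoing ARP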

  • Turning a series of raw images into movie frames in Android

    - by Nicholas Killewald
    I've got an Android project I'm working on that, ultimately, will require me to create a movie file out of a series of still images taken with a phone's camera. That is to say, I want to be able to take raw image frames and string them together, one by one, into a movie. Audio is not a concern at this stage. Looking over the Android API, it looks like there are calls to create movie files, but it seems those are entirely geared around making a live recording from the camera on an immediate basis. While nice, I can't use that for my purposes, as I need to put annotations and other post-production things on the images as they come in, before they get fed into a movie (plus, the images come in way too slowly to do a live recording). Worse, looking over the Android source, it looks like a non-trivial task to rewire that to do what I want (at least without touching the NDK). Is there any way I can use the API to do something like this? Or alternatively, what would be the best way to go about it, if it's even feasible on cell phone hardware (which seems to keep getting more and more powerful, strangely...)?

  • How to R/W the hard disk when the CPU is in protected mode?

    - by smwikipedia
    I am doing some OS experiments. Until now, all my code has used the real-mode BIOS interrupts to manipulate the hard disk and floppy. But once my code enables the CPU's protected mode, none of the real-mode BIOS interrupt service routines are available. How can I read/write the hard disk and floppy? I have a feeling that I need to write some hardware drivers now. Am I right? Is this why an OS is so difficult to develop? I know that hardware is controlled by reading from and writing to certain control or data registers. For example, I know that the Command Block Registers of the hard disk range from 0x1F0 to 0x1F7. But I am wondering whether the register addresses of so many different pieces of hardware are the same on every PC platform, or do I have to detect them before using them? How do I detect them? For any responses I present my deep appreciation.
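
    To make the register-level picture concrete, here is a minimal sketch (mine, not from the question) of a 28-bit LBA polled-PIO read of one sector through exactly those ports; the outb/inb/inw helpers are assumed wrappers around the CPU's IN/OUT instructions:

        #include <stdint.h>

        /* Assumed port-I/O helpers, typically one IN/OUT instruction each. */
        extern void     outb(uint16_t port, uint8_t value);
        extern uint8_t  inb(uint16_t port);
        extern uint16_t inw(uint16_t port);

        #define ATA_BASE 0x1F0  /* primary IDE channel command block */

        void ata_pio_read_sector(uint32_t lba, uint16_t *buf)
        {
            outb(ATA_BASE + 6, 0xE0 | ((lba >> 24) & 0x0F)); /* master drive, LBA mode */
            outb(ATA_BASE + 2, 1);                           /* sector count = 1 */
            outb(ATA_BASE + 3, lba & 0xFF);                  /* LBA bits 0-7 */
            outb(ATA_BASE + 4, (lba >> 8) & 0xFF);           /* LBA bits 8-15 */
            outb(ATA_BASE + 5, (lba >> 16) & 0xFF);          /* LBA bits 16-23 */
            outb(ATA_BASE + 7, 0x20);                        /* READ SECTORS command */

            while ((inb(ATA_BASE + 7) & 0x88) != 0x08)       /* poll status: BSY=0, DRQ=1 */
                ;

            for (int i = 0; i < 256; i++)                    /* one sector = 256 words */
                buf[i] = inw(ATA_BASE);
        }

    The 0x1F0 base is a convention inherited from the PC/AT; on modern machines the authoritative addresses come from enumerating the PCI configuration space rather than assuming fixed ports.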

  • Closing Windows Forms on a Touchscreen

    - by drharris
    Our clients have fat fingers, and so do we. We take touchscreen netbooks apart to insert them into our custom hardware, and I write a software interface that shows up on the touchscreen. The problem is that it has about a 3/4" bezel over the screen, which means hitting that little red "X" becomes a challenge, especially considering reduced capacitive ability on the edges and corners. Is there a way to make this standard close button larger? Of course, in the application I can always make really nice 80x80 buttons that are perfectly usable, but there seems to be no way to override the default frame of the form. We have tried enabling Large Fonts and all the built-in accessibility features, but nothing seems to make it large enough to hit successfully. Simply adding a toolbar button is also not much of an option; we prefer to utilize the standard look and feel of a normal Windows application. Alternatively, should we be looking at making some sort of "kiosk mode" where we simply go fullscreen and do nothing involving the taskbar or title bar? How difficult is this to accomplish, if so?
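
    For what it's worth, the kiosk-mode route is only a few lines in Windows Forms; a minimal sketch with a finger-sized close button (the 120-pixel sizing is arbitrary):

        using System.Windows.Forms;

        public class KioskForm : Form
        {
            public KioskForm()
            {
                FormBorderStyle = FormBorderStyle.None;  // no title bar, so no tiny "X"
                WindowState = FormWindowState.Maximized; // borderless + maximized covers the taskbar too

                var close = new Button { Text = "Close", Width = 120, Height = 120 }; // fat-finger friendly
                close.Click += (s, e) => Close();
                Controls.Add(close);
            }
        }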

  • Where are possible locations of queueing/buffering delays in Linux multicast?

    - by Matt
    We make heavy use of multicasting messaging across many Linux servers on a LAN. We are seeing a lot of delays. We basically send an enormous number of small packages. We are more concerned with latency than throughput. The machines are all modern, multi-core (at least four, generally eight, 16 if you count hyperthreading) machines, always with a load of 2.0 or less, usually with a load less than 1.0. The networking hardware is also under 50% capacity. The delays we see look like queueing delays: the packets will quickly start increasing in latency, until it looks like they jam up, then return back to normal. The messaging structure is basically this: in the "sending thread", pull messages from a queue, add a timestamp (using gettimeofday()), then call send(). The receiving program receives the message, timestamps the receive time, and pushes it in a queue. In a separate thread, the queue is processed, analyzing the difference between sending and receiving timestamps. (Note that our internal queues are not part of the problem, since the timestamps are added outside of our internal queuing.) We don't really know where to start looking for an answer to this problem. We're not familiar with Linux internals. Our suspicion is that the kernel is queuing or buffering the packets, either on the send side or the receive side (or both). But we don't know how to track this down and trace it. For what it's worth, we're using CentOS 4.x (RHEL kernel 2.6.9).
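
    One kernel-side queue that is cheap to rule in or out is the per-socket buffer pair; a hedged sketch using the standard sockets API (the 4 MB figure is arbitrary, and the effective ceiling is the net.core.rmem_max / wmem_max sysctls):

        #include <sys/socket.h>

        /* Enlarge the kernel's per-socket queues. A receiver that drains
           slower than SO_RCVBUF fills shows up as exactly this kind of
           build-up-then-recover latency pattern. */
        static void grow_socket_buffers(int sock)
        {
            int size = 4 * 1024 * 1024;  /* 4 MB, for example */
            setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &size, sizeof size);
            setsockopt(sock, SOL_SOCKET, SO_SNDBUF, &size, sizeof size);
        }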

  • How do software projects go over budget and under-deliver?

    - by Carlos
    I've come across this story quite a few times here in the UK: the NHS computer system. Summary: we're spunking £12 billion on some health software with barely anything working. I was sitting in the office discussing this with my colleagues, and we had a little think about it. From what I can see, all the NHS needs is a database + a middle tier of drugs/hospitals/patients/prescriptions objects, and various GUIs for doctors and nurses to look at. You'd also need to think about security and scalability. And you'd need to sit around a hospital/pharmacy/GP's office for a bit to figure out what they need. But, all told, I'd say I could knock together something with that kind of structure in a couple of days, and maybe throw in a month or two to make it work at scale. If I had a few million quid, I could probably hire some really excellent designers to make a maintainable codebase, and also buy appropriate hardware to run the system on. I hate to trivialize something that seems to have caused so much trouble, but to me it looks like just a big distributed CRUD + UI system. So how on earth did this project bloat to £12B without producing much useful software? As I don't think the software sounds that complicated, I can only imagine that something about how it was organised caused this mess. Is outsourcing the problem? Is it that the software designers were never made to understand the medical business? What are your experiences with projects that went over budget and under-delivered? What are best practices for large projects? Have you ever worked on such a project?

  • Indexing/Performance strategies for vast amount of the same value

    - by DrColossos
    Base information: this is in the context of the indexing process of OpenStreetMap data. To simplify the question: the core information is divided into 3 main types with values 'W', 'R', 'N' (VARCHAR(1)). The table has somewhere around ~75M rows; the rows with 'W' make up ~42M of them. Existing indexes are not relevant to this question. Now the question itself: the indexing of the data is done via a procedure. Inside this procedure, there are some loops that do the following:

        [...]
        SELECT * FROM table WHERE the_key = 'W';
        [...]

    The results get looped over again, and the above query itself is also inside a loop. This takes a lot of time and slows down the process massively. An index on the_key is obviously useless, since all the values that the index might use are the same ('W'). The script itself runs at a speed that is OK; only the SELECTing takes very long. Do I:

    - need to create a "special" kind of index that takes this into account and makes the SELECT quicker? If so, which one?
    - need to tune some of the server parameters (they are already tuned, and the results they deliver seem to be good; if needed, I can post them)?
    - have to live with the speed and simply get more hardware to gain more power (Tim Taylor grunt grunt)?

    Any alternatives to the above points (except rewriting it or not using it)?
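
    The "special" index that usually comes up for this pattern is a partial index; a hedged sketch in PostgreSQL syntax, with other_column standing in for whatever the surrounding loops actually filter on:

        -- Indexes only the ~42M rows where the_key = 'W'; useful when queries
        -- combine that predicate with a selective condition on another column.
        CREATE INDEX idx_w_rows ON the_table (other_column) WHERE the_key = 'W';

    Note that for a bare SELECT * ... WHERE the_key = 'W' that returns all 42M rows, no index beats a sequential scan; an index only pays off once the predicate becomes more selective than that.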

  • Best (Java) book for understanding 'under the bonnet' for programming?

    - by Ben
    What would you say is the best book to buy to understand exactly how programming works under the hood, in order to increase performance? I've coded in assembly at university, I studied computer architecture, and I've obviously done high-level programming, but what I really don't understand is things like:

    - what is happening when I perform a cast?
    - what's the difference in performance if I declare something global as opposed to local?
    - how does the memory layout for an ArrayList compare with a Vector or a LinkedList?
    - what's the overhead with pointers?
    - are locks more efficient than using synchronized?
    - would creating my own array using int[] be faster than using an ArrayList?
    - advantages/disadvantages of declaring a variable volatile

    I have got a copy of Java Performance Tuning, but it doesn't go down very low, and it contains rather obvious things like suggesting a HashMap instead of an ArrayList as you can map the keys to memory addresses, etc. I want something a bit more computer-sciencey, linking the programming language to what happens with the assembler/hardware. The reason I'm asking is that I have an interview coming up for a job in high-frequency trading and everything has to be as efficient as possible, yet I can't remember every single possible efficiency saving, so I'd just like to learn the fundamentals. Thanks in advance

  • Compiling scalafx for Java 7u7 (that contains JavaFX 2.2) on OS X

    - by akauppi
    The compilation instructions for scalafx say to do:

        export JAVAFX_HOME=/Path/To/javafx-sdk2.1.0-beta
        sbt clean compile package make-pom package-src

    However, with the new packaging of JavaFX as part of the Java JDK itself (i.e. 7u7 for OS X), there no longer seems to be such a 'javafx-sdkx.x.x' folder. The Oracle docs say that the JavaFX JDK is placed alongside the main Java JDK (in the same folders). So I do:

        $ export JAVAFX_HOME=/Library/Java/JavaVirtualMachines/jdk1.7.0_07.jdk
        $ sbt clean
        [warn] Using project/plugins/ (/Users/asko/Sources/scalafx/project/plugins) for plugin configuration is deprecated.
        [warn] Put .sbt plugin definitions directly in project/,
        [warn]   .scala plugin definitions in project/project/,
        [warn]   and remove the project/plugins/ directory.
        [info] Loading project definition from /Users/asko/Sources/scalafx/project/plugins/project
        [info] Loading project definition from /Users/asko/Sources/scalafx/project/plugins
        [error] java.lang.NullPointerException
        [error] Use 'last' for the full log.
        Project loading failed: (r)etry, (q)uit, (l)ast, or (i)gnore?

    Am I doing something wrong, or is scalafx not yet compatible with the latest Java release (7u7, JavaFX 2.2)? What can I do? http://code.google.com/p/scalafx/

    Addendum: ...and finally (following Igor's solution below) sbt run launches the colorful circles demo easily (well, if one has a supported GPU, that is). Oracle claims that "JavaFX supports graphic hardware acceleration on any Mac OS X system that is Lion or later", but I am inclined to think the NVidia-powered Mac Mini I'm using does software rendering. A recent MacBook Air (Core i7) is a completely different beast! :)

  • How do I insert this subclass into my code?

    - by BamsBamx
    This is a very noob question, so I hope you can help me with this... This is my code:

        public class PantallaOpciones extends PreferenceActivity {

            private SharedPreferences preferences;

            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                preferences = PreferenceManager.getDefaultSharedPreferences(this);

                // Declare the listener before wiring it to the preference.
                Preference.OnPreferenceClickListener keycodelistener =
                        new Preference.OnPreferenceClickListener() {
                    public boolean onPreferenceClick(Preference preference) {
                        keycodedialog();
                        return false;
                    }
                };
                findPreference("speechkeycode").setOnPreferenceClickListener(keycodelistener);
            }

            private void keycodedialog() {
                final Dialog dialog = new Dialog(this);
                dialog.setContentView(R.layout.keycodedialog);
                dialog.setTitle("Speech keycode");

                // Look the TextView up inside the dialog's own layout.
                final TextView keypresstext = (TextView) dialog.findViewById(R.id.keypresstext);

                Button savekeycode = (Button) dialog.findViewById(R.id.btnsavekeycode);
                savekeycode.setOnClickListener(new OnClickListener() {
                    public void onClick(View v) {
                        dialog.dismiss();
                    }
                });

                Button resetkeycode = (Button) dialog.findViewById(R.id.btnresetvalue);
                resetkeycode.setOnClickListener(new OnClickListener() {
                    public void onClick(View v) {
                        dialog.dismiss();
                    }
                });

                dialog.show();
            }
        }

    Okay, now I want to add this to the dialog:

        public boolean onKeyDown(int keyCode, KeyEvent event) {
            // SOME STUFF
            return super.onKeyDown(keyCode, event);
        }

    So I want to listen for a key press while the dialog is open and show the keycode of the hardware key press using textview.setText()... The question is: how do I insert public boolean onKeyDown into the dialog? Thanks in advance!! :)
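
    One possibility, sketched under the assumption that a plain android.app.Dialog is in use: instead of overriding onKeyDown, attach a DialogInterface.OnKeyListener inside keycodedialog(), which receives hardware key events while the dialog is in the foreground:

        dialog.setOnKeyListener(new DialogInterface.OnKeyListener() {
            public boolean onKey(DialogInterface d, int keyCode, KeyEvent event) {
                if (event.getAction() == KeyEvent.ACTION_DOWN) {
                    keypresstext.setText("Keycode: " + keyCode); // show which key was pressed
                    return true;  // consume the event
                }
                return false;     // let other key events through
            }
        });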

  • Determine asymmetric latencies in a network

    - by BeeOnRope
    Imagine you have many clustered servers, across many hosts, in a heterogeneous network environment, such that the connections between servers may have wildly varying latencies and bandwidth. You want to build a map of the connections between servers by transferring data between them. Of course, this map may become stale over time as the network topology changes - but let's ignore those complexities for now and assume the network is relatively static. Given the latencies between nodes in this host graph, calculating the bandwidth is a relatively simple timing exercise. I'm having more difficulty with the latencies, however. To get round-trip time, it is a simple matter of timing a return-trip ping from the local host to a remote host - both timing events (start, stop) occur on the local host. What if I want one-way times, under the assumption that the latency is not equal in both directions? Assuming that the clocks on the various hosts are not precisely synchronized (or at least that their error is of the same magnitude as the latencies involved) - how can I calculate the one-way latency? In a related question: is this asymmetric latency (where a link is quicker in one direction than the other) common in practice? For what reasons/hardware configurations? Certainly I'm aware of asymmetric bandwidth scenarios, especially on last-mile consumer links such as DSL and cable, but I'm not so sure about latency. Added: after considering the comment below, the second portion of the question is probably better off on Server Fault.
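
    For context, the standard NTP exchange (RFC 5905) makes the difficulty explicit. With local send/receive times t0 and t3, and remote receive/send times t1 and t2, only two quantities are observable:

        /* NTP-style round-trip algebra: t0/t3 are local send/receive times,
           t1/t2 are remote receive/send times, all in seconds. */
        double roundtrip_delay(double t0, double t1, double t2, double t3)
        {
            return (t3 - t0) - (t2 - t1);          /* measurable without synced clocks */
        }

        double clock_offset(double t0, double t1, double t2, double t3)
        {
            return ((t1 - t0) + (t2 - t3)) / 2.0;  /* valid only if paths are symmetric */
        }

    The two one-way latencies appear only in these combinations, so without an external time reference (GPS, PTP hardware timestamping) the path asymmetry cannot be separated from the clock offset.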

  • Creating a network adapter - how hard is it?

    - by Vilx-
    I'm interested in building a little (commercial) device on top of Arduino. I want it to be able to interface with a network. Network as in standard Ethernet, Cat5, RJ-45, etc. I know that there is an Ethernet Shield, but it costs even more than the Arduino itself, and it's pretty big. Naturally, I want my device to be as small and as cheap as possible. So I'm thinking about recreating an Ethernet module myself. The problem is, I haven't got any experience with Ethernet, nor do I have a good idea where to start looking, so I can't even say if my ideas are feasible. Ultimately, I would like the device to have three ports - one for the incoming signal, two outgoing - so the device is essentially a little switch which is plugged in itself as well. The switching capabilities need not be very fast - the volume of data will be low. 10 Mbit is more than enough; it can be even slower. If that is not possible, a single port for controlling the device itself will also do. Another possibility I'm considering is power line communications - sending information through power lines. That's another area I've no experience with. What hardware should I be looking at, and where can I find information about the necessary software? So, can anyone tell me if these ideas are feasible, and if yes, where should I start looking?

  • Migrating the machineKey from IIS 6 on an old server to IIS 7 on a new server

    - by MaseBase
    I am migrating our hosting environment to a totally new data center with new boxes and hardware and software... the whole deal. Our website cookies are encrypted using the machineKey, so when I make a request to my domain and point it to the new web server (by overriding the local hosts file), I get an error because the cookie cannot be decrypted, since the machine key is different. I'd like to avoid any problems a frequent user might have when they arrive at the new server for the first time. To the best of my knowledge, at this point I think I need to set the same machineKey from our current servers on our new servers. This way, when past visitors with a cookie arrive at our website served by the new server, the cookie will be decrypted properly with the machineKey it was encrypted with, and they will be logged in properly. My question is: where do I find my machineKey value (on an IIS 6 / Windows Server 2003 box) so I can use that value to set it statically on my new servers? I've pulled up my machine.config file, but it doesn't specify the key; it only specifies a configSection where the key can be defined. It's not in my web.config for the app or elsewhere. I did find a great article on some machineKey and web garden woes (which could explain some other bugs I've been experiencing with regard to the machineKey).

    Update: I am back to this issue and am still faced with a similar problem. The machineKey is auto-generated on the IIS 6 server, but I need to get that exact key so I can set it explicitly and not have it auto-generated anymore. Any help is appreciated...
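
    For reference, a sketch of what pinning the key explicitly looks like once a key value is in hand (the hex strings below are placeholders, not real keys); the same element with the same values on every server keeps cookies portable between them. Note that this sets a key going forward - it does not recover a previously auto-generated one:

        <!-- In web.config (or machine.config), replacing AutoGenerate,IsolateApps: -->
        <system.web>
          <machineKey
              validationKey="A1B2C3D4E5F6...90"
              decryptionKey="09F6E5D4C3B2...A1"
              validation="SHA1"
              decryption="AES" />
        </system.web>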

  • pthread_join from a signal handler

    - by liv2hak
    I have a capture program which, in addition to capturing data and writing it into a file, also prints some statistics. The function that prints the statistics

        static void* report(void)
        {
            /* Print statistics */
        }

    is called roughly every second, using an ALARM that expires every second. So the program is like this:

        void capture_program()
        {
            pthread_t report_thread;

            if (pthread_create(&report_thread, NULL, report, NULL)) {
                fprintf(stderr, "Error creating reporting thread! \n");
            }

            while (!exit_now) {
                /* Capturing code
                   --------------
                   -------------- */
                if (doreport)
                    usleep(5);
            }
        }

        void *report(void *param)
        {
            while (true) {
                if (doreport) {
                    doreport = 0;
                    // access some register from hardware
                    usleep(5);
                }
            }
        }

    The expiry of the timer sets the doreport flag. If this flag is set, report() runs and clears the flag. I am using usleep to alternate between the two threads in the program. This seems to work fine. I also have a signal handler to handle SIGINT (i.e. Ctrl+C):

        static void anysig(int sig)
        {
            if (sig != SIGINT)
                dagutil_set_signal_handler(SIG_DFL);

            /* Tell the main loop to exit */
            exit_now = 1;

            return;
        }

    My questions:

    1. Is it safe to call pthread_join from inside the signal handler?
    2. Should I use the exit_now flag for the report thread as well?

  • How to determine if the camera button is half pressed

    - by Matthew
    I am creating a small test camera application, and I would like to implement a feature that shows focus brackets on the screen while the hardware camera button is pressed halfway down. I created a camera_ButtonHalfPress event to perform the focus action, but I am unsure of how to toggle the brackets accordingly. Essentially, my goal is to show the brackets while the camera button is pressed halfway down, and then remove them if the button is either pressed all the way or released before being pressed all the way down. The button being released is the part I am having trouble with. What I have is as follows:

    MainPage.xaml.cs

        private void camera_ButtonHalfPress(object sender, EventArgs e)
        {
            //camera.Focus();

            // Show the focus brackets.
            focusBrackets.Visibility = Visibility.Visible;
        }

        private void camera_ButtonFullPress(object sender, EventArgs e)
        {
            // Hide the focus brackets.
            focusBrackets.Visibility = Visibility.Collapsed;
            camera.CaptureImage();
        }

    Currently, if the user releases the camera button before it is pressed all the way, the focus brackets persist on the screen. How might I fix this issue?
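
    A hedged sketch of one way to catch the release, assuming this is Windows Phone's Microsoft.Devices.CameraButtons class, which exposes a ShutterKeyReleased event alongside the half- and full-press ones:

        // Wire up all three shutter-button events, e.g. in OnNavigatedTo.
        CameraButtons.ShutterKeyHalfPressed += camera_ButtonHalfPress;
        CameraButtons.ShutterKeyPressed     += camera_ButtonFullPress;
        CameraButtons.ShutterKeyReleased    += (s, e) =>
        {
            // Fires when the button comes back up, including from a half
            // press, so an abandoned half press also hides the brackets.
            focusBrackets.Visibility = Visibility.Collapsed;
        };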

  • Prompting for authentication from a wxPython program and passing it along to IIS?

    - by MetaHyperBolic
    I have a client (written in Python, with a wxPython front end in dead-simple wizard style) which communicates with a website running IIS. A Python script receives requests and does the usual client-server dance. I would have written this as a browser application, but for the requirement that certain things happen on the local PC that the web can't help with (file manipulation, interfacing with certain USB hardware, etc.). Right now, I am simply using the logon credentials, compounded as a string from os.environ['USERDOMAIN'] and os.environ['USERNAME'], to pass along to the server, which connects to Active Directory and enumerates the members of the group, looking for those logon credentials. It's an ugly hack, but it works. Obviously, I could make people log out of the generic helper accounts and log back into Windows using specific accounts. However, I wondered how feasible it would be to provide some kind of logon prompt wherein the user can type a name and password, then have some kind of authorization token passed on to IIS. This seems like something I would not want to do myself, given that amateurs almost always make huge security mistakes. Now you can see why I wish this were purely web-based. What's a good way to handle this?

  • C# STILL returning wrong number of cores

    - by Justin
    OK, so I posted 'In C# GetEnvironmentVariable("NUMBER_OF_PROCESSORS") returns the wrong number', asking how to get the correct number of cores in C#. Some helpful people directed me to a couple of questions where similar things were asked, but I have already tried those solutions. My question was then closed as being the same as another question, which is true, it is, but the solution given there didn't work. So I'm opening another one, hoping that someone may be able to help, realising that the other solutions DID NOT work. That question was 'How to find the Number of CPU Cores via .NET/C#?', which used WMI to try to get the correct number of cores. Well, here's the output from the code given there:

        Number Of Cores: 32
        Number Of Logical Processors: 32
        Number Of Physical Processors: 4

    As per my last question, the machine is a 64-core AMD Opteron 6276 (4x16 cores) running Windows Server 2008 R2 HPC edition. Regardless of what I do, Windows always seems to return 32 cores even though 64 are available. I have confirmed the machine is only using 32, and if I hardcode 64 cores, then the machine uses all of them. I'm wondering if there might be an issue with the way the AMD CPUs are detected. FYI, in case you haven't read the last question: if I type echo %NUMBER_OF_PROCESSORS% at the command line, it returns 64. It just won't do it in a programming environment. Thanks, Justin

    Update: outputting PROCESSOR_ARCHITECTURE returns AMD64 from the command line, but x86 from the program. The program is 32-bit running on 64-bit hardware. I was asked to compile it to 64-bit, but it still shows 32 cores.
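
    For what it's worth, a hedged sketch of one way around per-process limits: a 32-bit (WOW64) process is reported at most 32 processors, and on Windows 7 / Server 2008 R2 and later the native GetActiveProcessorCount API can count logical processors across all processor groups regardless of the caller's own view:

        using System;
        using System.Runtime.InteropServices;

        static class ProcessorCountCheck
        {
            const ushort ALL_PROCESSOR_GROUPS = 0xFFFF;

            [DllImport("kernel32.dll")]
            static extern uint GetActiveProcessorCount(ushort groupNumber);

            static void Main()
            {
                // Counts logical processors in every processor group,
                // independent of this process's affinity or bitness.
                Console.WriteLine(GetActiveProcessorCount(ALL_PROCESSOR_GROUPS));
            }
        }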

  • Accelerometer stops delivering samples when the screen is off on Droid/Nexus One, even with a WakeLock

    - by William
    I have some code that extends a Service and records onSensorChanged(SensorEvent event) accelerometer sensor readings on Android. I would like to be able to record these sensor readings even when the device is off (I'm careful with battery life, and it's made obvious when it's running). While the screen is on, the logging works fine on a 2.0.1 Motorola Droid and a 2.1 Nexus One. However, when the phone goes to sleep (by pushing the power button), the screen turns off and the onSensorChanged events stop being delivered (verified by using a Log.e message every N times onSensorChanged gets called). The service acquires a WakeLock to ensure that it keeps running in the background, but it doesn't seem to have any effect. I've tried all the various PowerManager wake locks, but none of them seem to matter:

        _WakeLock = _PowerManager.newWakeLock(PowerManager.PARTIAL_WAKE_LOCK, "My Tag");
        _WakeLock.acquire();

    There have been conflicting reports about whether or not you can actually get data from the sensors while the screen is off... does anyone have any experience with this on a more modern version of Android (Eclair) and hardware? This thread seems to indicate that it was working in Cupcake: http://groups.google.com/group/android-developers/msg/a616773b12c2d9e5 Thanks! PS: The exact same code works as intended on 1.5 on a G1; the logging continues when the screen turns off, when the application is in the background, etc.

  • How do software events work internally?

    - by Duddle
    Hello! I am a student of computer science and have learned many of the basic concepts of what is going on "under the hood" while a computer program is running. But recently I realized that I do not understand how software events work efficiently. In hardware, this is easy: instead of the processor "busy waiting" to see if something happened, the component sends an interrupt request. But how does this work in, for example, a mouse-over event? My guess is as follows: if the mouse sends a signal ("moved"), the operating system calculates its new position p, then checks what program is being drawn on the screen, tells that program position p, then the program itself checks what object is at p, checks if any event handlers are associated with said object, and finally fires them. That sounds terribly inefficient to me, since a tiny mouse movement equates to a lot of CPU context switches (which I learned are relatively expensive). And then there are dozens of background applications that may want to do stuff of their own as well. Where is my intuition failing me? I realize that even "slow" 500 MHz processors do 500 million operations per second, but it still seems like too much work for such a simple event. Thanks in advance!
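
    For a concrete picture of the application side, a sketch of the classic Win32 message loop; GetMessage blocks in the kernel until an event is queued for this thread, so background applications with nothing pending consume no CPU at all:

        #include <windows.h>

        /* The consumer side of the per-thread event queue: no busy waiting. */
        void run_message_loop(void)
        {
            MSG msg;
            while (GetMessage(&msg, NULL, 0, 0) > 0) {
                TranslateMessage(&msg);  /* e.g. turn raw key events into characters */
                DispatchMessage(&msg);   /* invoke the target window's procedure */
            }
        }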
