Search Results

Search found 8902 results on 357 pages for 'hardware virtualization'.


  • Retrieving license type (linux/windows/windows+sqlserver) for an Amazon EC2 instance via the API?

    - by Geir
    I need to calculate the hourly running costs for my Amazon EC2 instances. This varies even between instances with the same hardware config (instance type) because I use different Amazon images (AMIs): some plain Windows Server and some Windows Server with SQL Server (both of which carry additional costs compared with plain Linux instances). The EC2 Java API has a describeInstances() method which returns Instance objects with metadata such as instance id, instance type (m1.small/large...), state (running, stopped...), public IP, etc. The Instance object also has a .getLicense().getPool() which, according to the Java API docs, should return "The license pool from which this license was used (ex: 'windows')." I thought this is where it might also give 'windows+sqlserver' or something to that effect. The getLicense() method does, however, return null. I've navigated around the EC2 web console without finding this information, but I'm hoping it is possible - otherwise it would mean that you cannot identify the true hourly cost of a particular instance unless you know which AMI was used to create it in the first place (plain Windows Server or Windows Server with SQL Server). Anyone? Thanks :) /Geir
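
    (Illustrative sketch, added here, not part of the original question.) The describeInstances() walk being discussed looks roughly like this with the AWS SDK for Java of that era; getLicense() is the call in question, and getPlatform() is shown as a fallback that distinguishes Windows from Linux but not SQL Server editions - treat exact method names and the placeholder credentials as assumptions against whichever SDK version is in use:

        import com.amazonaws.auth.BasicAWSCredentials;
        import com.amazonaws.services.ec2.AmazonEC2;
        import com.amazonaws.services.ec2.AmazonEC2Client;
        import com.amazonaws.services.ec2.model.Instance;
        import com.amazonaws.services.ec2.model.Reservation;

        public class LicenseProbe {
            public static void main(String[] args) {
                // Placeholder credentials - substitute your own.
                AmazonEC2 ec2 = new AmazonEC2Client(
                        new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"));
                for (Reservation r : ec2.describeInstances().getReservations()) {
                    for (Instance i : r.getInstances()) {
                        // getLicense() is the call the question refers to; it can be null.
                        String pool = (i.getLicense() != null) ? i.getLicense().getPool() : null;
                        // getPlatform() returns "windows" for Windows AMIs and null for
                        // Linux, but does not separate plain Windows from Windows+SQL Server.
                        System.out.printf("%s type=%s platform=%s licensePool=%s%n",
                                i.getInstanceId(), i.getInstanceType(), i.getPlatform(), pool);
                    }
                }
            }
        }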

    Read the article

  • Real time location tracking - windows program or browser based?

    - by mawg
    I want to track a few hundred, maybe a few thousand people in real time. Let's say that the hardware aspects are sorted out and I can get the data into a database. Now, I want to get it out and show it, in real time. Weeeell... "real-enough" time. Let's say that I want to draw a floorplan of a building and plot everyone every 1 to 5 seconds. (I might want to show only certain "kinds" of people at the click of a button; I will need data mining, etc., but let's stick with the worst-case scenario.) I am comfortable enough with PHP, though not with this sort of thing. I personally would be happier with a Windows app coded in Delphi, but the trend seems to be to make everything browser-based. So, the question, I guess, is whether a browser can handle this, and whether there are compelling arguments for a Windows-based or browser-based solution. If browser-based can handle this (displaying a few thousand data points a second), and there are no overwhelming arguments for Windows, then I guess I will go browser-based and learn a few new tricks. The obvious advantage being that I could also reuse a large part of my code for (vehicle) tracking on Google Maps.

    Read the article

  • MapView EXC_BAD_ACCESS (SIGSEGV) and KERN_INVALID_ADDRESS

    - by user768113
    I'm having some 'issues' with my application... well, it crashes in a UIViewController that is presented modally; there the user enters information through UITextFields and his location is tracked by a MapView. Let's call this view controller "MapViewController". When the user submits the form, I call a different ViewController - modally again - that processes this info, and a third one answers accordingly, then I go back to a MenuVC using unwind segues, which then calls MapViewController, and so on. This sequence is repeated many times, but it always crashes in MapViewController. Looking at the crash log, I think the MapView may be the cause, or some element in the UI (because of the UIKit framework). I tried to use NSZombie in order to track down a memory issue, but it doesn't give me a clue about what's happening. Here is the crash log:

        Hardware Model:  iPad3,4
        Process:         MyApp [2253]
        OS Version:      iOS 6.1.3 (10B329)
        Report Version:  104

        Exception Type:  EXC_BAD_ACCESS (SIGSEGV)
        Exception Codes: KERN_INVALID_ADDRESS at 0x00000044
        Crashed Thread:  0

        Thread 0 name:  Dispatch queue: com.apple.main-thread
        Thread 0 Crashed:
        0   IMGSGX554GLDriver          0x328b9be0 0x328ac000 + 56288
        1   IMGSGX554GLDriver          0x328b9b8e 0x328ac000 + 56206
        2   IMGSGX554GLDriver          0x328bc2f2 0x328ac000 + 66290
        3   IMGSGX554GLDriver          0x328baf44 0x328ac000 + 61252
        4   libGPUSupportMercury.dylib 0x370f86be 0x370f6000 + 9918
        5   GLEngine                   0x34ce8bd2 0x34c4f000 + 629714
        6   GLEngine                   0x34cea30e 0x34c4f000 + 635662
        7   GLEngine                   0x34c8498e 0x34c4f000 + 219534
        8   GLEngine                   0x34c81394 0x34c4f000 + 205716
        9   VectorKit                  0x3957f4de 0x394c7000 + 754910
        10  VectorKit                  0x3955552e 0x394c7000 + 582958
        11  VectorKit                  0x394d056e 0x394c7000 + 38254
        12  VectorKit                  0x394d0416 0x394c7000 + 37910
        13  VectorKit                  0x394cb7ca 0x394c7000 + 18378
        14  VectorKit                  0x394c9804 0x394c7000 + 10244
        15  VectorKit                  0x394c86a2 0x394c7000 + 5794
        16  QuartzCore                 0x354a07a4 0x35466000 + 239524
        17  QuartzCore                 0x354a06fc 0x35466000 + 239356
        18  IOMobileFramebuffer        0x376f8fd4 0x376f4000 + 20436
        19  IOKit                      0x344935aa 0x34490000 + 13738
        20  CoreFoundation             0x33875888 0x337e9000 + 575624
        21  CoreFoundation             0x338803e4 0x337e9000 + 619492
        22  CoreFoundation             0x33880386 0x337e9000 + 619398
        23  CoreFoundation             0x3387f20a 0x337e9000 + 614922
        24  CoreFoundation             0x337f2238 0x337e9000 + 37432
        25  CoreFoundation             0x337f20c4 0x337e9000 + 37060
        26  GraphicsServices           0x373ad336 0x373a8000 + 21302
        27  UIKit                      0x3570e2b4 0x356b7000 + 357044
        28  MyApp                      0x000ea12e 0xe9000 + 4398
        29  MyApp                      0x000ea0e4 0xe9000 + 4324

    I think that's all. Additionally, I would like to ask you: if you are using unwind segues, then you are releasing view controllers from the memory heap, right? Meanwhile, performing segues lets you instantiate those controllers. Technically, MenuVC should be the only VC alive in the heap during the app life cycle, if you understand me.

    Read the article

  • Tracking down slow managed DLL loading

    - by Alex K
    I am faced with the following issue, and at this point I feel like I'm severely lacking some sort of tool - I just don't know what that tool is, or what exactly it should be doing. Here is the setup: I have a 3rd-party DLL that has to be registered in the GAC. This all works fine and good on pretty much every machine our software was deployed on before. But now we have 2 machines, seemingly identical to the ones we know work (they are cloned from the same image and stuffed with the same hardware, so pretty much the only difference is software settings, which I have gone over and over, and they seem fine). Now the problem: the DLL in the GAC takes a very long time to load. At least I believe this is the issue; what I can say definitively is that instantiating a single class from that DLL is the slow part. Once it is loaded, things fly as they always have. But while on known-good machines the DLL loads so fast that a timestamp in the log doesn't even change, on these 2 machines it takes over 1 minute to load. Knowns:

    - I have no access to the source, so I can't debug through the DLL.
    - Our app is the only one that uses it (so there shouldn't be simultaneous access issues).
    - There is only one version of this DLL in existence, so it shouldn't be a matter of version conflict.
    - The GAC reference is being used (if I uninstall the DLL from the GAC, an exception is thrown about the missing GAC reference).

    Could someone with greater skill in debug-fu suggest what I can do to track down the root cause of this issue?

    Read the article

  • How to optimize a PostgreSQL server for a "write once, read many"-type infrastructure?

    - by mhu
    Greetings, I am working on a piece of software that logs entries (and related tagging) in a PostgreSQL database for storage and retrieval. We never update any data once it has been inserted; we might remove it when the entry gets too old, but this is done at most once a day. Stored entries can be retrieved by users. The insertion of new entries can happen rather fast and regularly, thus the database will commonly hold several million elements. The tables used are pretty simple: one table for ids, raw content and insertion date; and one table storing tags and their values associated with an id. User searches mostly concern tag values, so SELECTs usually consist of JOIN queries on ids across the two tables. To sum it up:

    - 2 tables
    - lots of INSERT
    - no UPDATE
    - some DELETE, once a day at most
    - some user-generated SELECT with JOIN
    - huge data set

    What would an optimal server configuration (software and hardware - I assume, for example, that RAID10 could help) be for my PostgreSQL server, given these requirements? By optimal, I mean one that allows SELECT queries to take a reasonably short time. I can provide more information about the current setup (like tables, indexes...) if needed.

    Read the article

  • Does Interlocked guarantee visibility to other threads in C# or do I still have to use volatile?

    - by Lirik
    I've been reading the answer to a similar question, but I'm still a little confused... Abel had a great answer, but this is the part that I'm unsure about: "...declaring a variable volatile makes it volatile for every single access. It is impossible to force this behavior any other way, hence volatile cannot be replaced with Interlocked. This is needed in scenarios where other libraries, interfaces or hardware can access your variable and update it anytime, or need the most recent version." Does Interlocked guarantee visibility of the atomic operation to all threads, or do I still have to use the volatile keyword on the value in order to guarantee visibility of the change? Here is my example:

        public class CountDownLatch
        {
            private volatile int m_remain; // <--- do I need the volatile keyword there since I'm using Interlocked?
            private EventWaitHandle m_event;

            public CountDownLatch(int count)
            {
                Reset(count);
            }

            public void Reset(int count)
            {
                if (count < 0)
                    throw new ArgumentOutOfRangeException();
                m_remain = count;
                m_event = new ManualResetEvent(false);
                if (m_remain == 0)
                {
                    m_event.Set();
                }
            }

            public void Signal()
            {
                // The last thread to signal also sets the event.
                if (Interlocked.Decrement(ref m_remain) == 0)
                    m_event.Set();
            }

            public void Wait()
            {
                m_event.WaitOne();
            }
        }
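
    (Illustrative sketch, added here, not part of the original question - written in Java rather than C# to match the other examples on this page.) Java documents the same guarantee explicitly: AtomicInteger.decrementAndGet() is atomic and has the memory effects of both a volatile read and a volatile write, so the counter needs no extra volatile qualifier. A minimal latch along the same lines:

        import java.util.concurrent.atomic.AtomicInteger;

        // Same shape as the C# latch above; decrementAndGet() supplies both
        // atomicity and cross-thread visibility.
        public class SimpleLatch {
            private final AtomicInteger remain;
            private final Object lock = new Object();

            public SimpleLatch(int count) {
                remain = new AtomicInteger(count);
            }

            public void signal() {
                // The last thread to signal wakes the waiters.
                if (remain.decrementAndGet() == 0) {
                    synchronized (lock) { lock.notifyAll(); }
                }
            }

            public void await() throws InterruptedException {
                synchronized (lock) {
                    while (remain.get() > 0) {
                        lock.wait();
                    }
                }
            }
        }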

    Read the article

  • Should a C++ constructor do real work?

    - by Wade Williams
    I'm struggling with some advice I have in the back of my mind but for which I can't remember the reasoning. I seem to remember at some point reading some advice (can't remember the source) that C++ constructors should not do real work. Rather, they should initialize variables only. The advice went on to explain that real work should be done in some sort of init() method, to be called separately after the instance was created. The situation is I have a class that represents a hardware device. It makes logical sense to me for the constructor to call the routines that query the device in order to build up the instance variables that describe the device. In other words, once new instantiates the object, the developer receives an object which is ready to be used - no separate call to init() required. Is there a good reason why constructors shouldn't do real work? Obviously it could slow allocation time, but that wouldn't be any different from calling a separate method immediately after allocation. I'm just trying to figure out what gotchas I'm not currently considering that might have led to such advice.

    Read the article

  • What's the best Linux backup solution?

    - by Jon Bright
    We have four Linux boxes (all running Debian or Ubuntu) on our office network. None of these boxes are especially critical and they're all using RAID. To date, I've therefore been doing backups of the boxes by having a cron job upload tarballs containing the contents of /etc, MySQL dumps and other such changing, non-packaged data to a box at our geographically separate hosting centre. I've realised, however, that:

    - the tarballs are sufficient to rebuild from, but it's certainly not a painless process to do so (I recently tried this out as part of a hardware upgrade of one of the boxes) - long-term, the process isn't sustainable
    - each of the boxes is currently producing a tarball of a couple of hundred MB each day, 99% of which is the same as the previous day
    - partly due to the size issue, the backup process requires more manual intervention than I want (to find whatever 5GB file is inflating the tarball and kill it)
    - again due to the size issue, I'm leaving out stuff it would be nice to include - the contents of users' home directories, for example. There's almost nothing of value there that isn't in source control (and these aren't our main dev boxes), but it would be nice to keep them anyway
    - there must be a better way

    So, my question is, how should I be doing this properly? The requirements are:

    - needs to be an offsite backup (one of the main things I'm doing here is protecting against fire/whatever)
    - should require as little manual intervention as possible (I'm lazy, and box-herding isn't my main job)
    - should continue to scale with a couple more boxes, slightly more data, etc.
    - preferably free/open source (cost isn't the issue, but especially for backups, openness seems like a good thing)
    - an option to produce some kind of DVD/Blu-Ray/whatever backup from time to time wouldn't be bad

    My first thought was that this kind of incremental backup is what tar was created for - create a tar file once each month, add to it incrementally, and rsync the results to the remote box. But others probably have better suggestions.

    Read the article

  • How to send KeyEvents through an input method service to a Dialog, or a Spinner menu?

    - by shutdown11
    I'm trying to implement an input method service that receives intents sent by a remote client and, in response, sends an appropriate KeyEvent. In the input method service I'm using this method to send KeyEvents, as in the SoftKeyboard sample:

        private void keyDownUp(int keyEventCode) {
            getCurrentInputConnection().sendKeyEvent(
                    new KeyEvent(KeyEvent.ACTION_DOWN, keyEventCode));
            getCurrentInputConnection().sendKeyEvent(
                    new KeyEvent(KeyEvent.ACTION_UP, keyEventCode));
        }

    It works on the home screen and in Activities... but it doesn't work when a Dialog or the menu of a Spinner is in the foreground; the event is sent to the parent activity behind the Dialog instead. Is there any way to send keys and control the device as if using the hardware keys, from an input method? A better explanation of what I'm trying to do: I am writing an input method that allows the device to be controlled remotely. In a client (a Java application on my desktop PC) I write a command (for example "UP"), a server on the device sends the intent with sendBroadcast(), and a receiver in the input method gets it and calls keyDownUp with the keycode of the DPAD_UP key. This generally works, but when I go to an app that shows a dialog, keyDownUp doesn't send the key event to the dialog (for example, to select the yes or no buttons) - it keeps controlling the activity behind the Dialog.
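
    (Illustrative sketch, added here, not part of the original question.) One commonly suggested alternative is android.app.Instrumentation.sendKeyDownUpSync(), which injects the key at the window level rather than through the current InputConnection. Note the caveats: it must not be called on the main thread, and injecting events into windows your process does not own requires the INJECT_EVENTS permission, which is reserved for system apps - so treat this as a sketch of the API shape rather than a drop-in fix:

        import android.app.Instrumentation;
        import android.view.KeyEvent;

        public class KeyInjector {
            // Blocks until the event has been processed, so call it off the main thread.
            public static void pressDpadUp() {
                new Instrumentation().sendKeyDownUpSync(KeyEvent.KEYCODE_DPAD_UP);
            }
        }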

    Read the article

  • Turning a series of raw images into movie frames in Android

    - by Nicholas Killewald
    I've got an Android project I'm working on that, ultimately, will require me to create a movie file out of a series of still images taken with a phone's camera. That is to say, I want to be able to take raw image frames and string them together, one by one, into a movie. Audio is not a concern at this stage. Looking over the Android API, it looks like there are calls in it to create movie files, but it seems those are entirely geared around making a live recording from the camera on an immediate basis. While nice, I can't use that for my purposes, as I need to put annotations and other post-production things on the images as they come in before they get fed into a movie (plus, the images come way too slowly to do a live recording). Worse, looking over the Android source, it looks like a non-trivial task to rewire that to do what I want it to do (at least without touching the NDK). Is there any way I can use the API to do something like this? Or alternatively, what would be the best way to go about this, if it's even feasible on cell phone hardware (which seems to keep getting more and more powerful, strangely...)?

    Read the article

  • Forwarding HTTP Request with Direct Server Return

    - by Daniel Crabtree
    I have servers spread across several data centers, each storing different files. I want users to be able to access the files on all servers through a single domain and have the individual servers return the files directly to the users. The following shows a simple example:

    1) The user's browser requests http://www.example.com/files/file1.zip
    2) The request goes to server A, based on the DNS A record for example.com.
    3) Server A analyzes the request and works out that /files/file1.zip is stored on server B.
    4) Server A forwards the request to server B.
    5) Server B returns file1.zip directly to the user without going through server A.

    Note: steps 4 and 5 must be transparent to the user and cannot involve sending a redirect to the user, as that would violate the requirement of a single domain. From my research, what I want to achieve is called "Direct Server Return" and it is a common setup for load balancing. It is also sometimes called a half reverse proxy. For step 4, it sounds like I need to do MAC Address Translation and then pass the request back onto the network; for servers outside the network of server A, tunneling will be required. For step 5, I simply need to configure server B as per the real servers in a load balancing setup. Namely, server B should have server A's IP address on the loopback interface and it should not answer any ARP requests for that IP address. My problem is how to actually achieve step 4. I have found plenty of hardware and software that can do this for simple load balancing at layer 4, but these solutions fall short and cannot handle the kind of custom routing I require. It seems like I will need to roll my own solution. Ideally, I would like to do the routing/forwarding at the web server level, i.e. in PHP or C# / ASP.NET. However, I am open to doing it at a lower level such as Apache or IIS, or at an even lower level, i.e. a custom proxy service in front of everything.

    Read the article

  • Closing Windows Forms on a Touchscreen

    - by drharris
    Our clients have fat fingers, and so do we. We take touchscreen netbooks apart to insert them into our custom hardware, and I write a software interface that shows up on the touchscreen. The problem is that it has about a 3/4" bezel over the screen, which means hitting that little red "X" becomes a challenge, especially considering reduced capacitive ability on the edges and corners. Is there a way to make this standard close button larger? Of course in the application I can always make really nice 80x80 buttons that are perfectly usable, but there seems to be no way to override the default frame of the form. We have tried enabling Large Fonts and all the built-in accessibility features, but nothing seems to make it large enough to hit successfully. Simply adding a toolbar button is also not much of an option. We prefer to utilize the standard look and feel of a normal Windows application. Alternatively, should we be looking at making some sort of "kiosk mode" where we simply go fullscreen and do nothing involving the taskbar or title bar? How difficult is this to accomplish, if so?

    Read the article

  • How to R/W hard disk when CPU is in Protect Mode?

    - by smwikipedia
    I am doing some OS experiments. Until now, all my code has used real-mode BIOS interrupts to manipulate the hard disk and floppy. But once my code enables protected mode on the CPU, the real-mode BIOS interrupt service routines are no longer available. How can I read/write the hard disk and floppy? I have a feeling that I need to write some hardware drivers now. Am I right? Is this why an OS is so difficult to develop? I know that hardware is controlled by reading from and writing to certain control or data registers. For example, I know that the Command Block Registers of the hard disk range from 0x1F0 to 0x1F7. But I am wondering: are the register addresses of all the different hardware devices the same on every PC platform, or do I have to detect them before using them? How do I detect them? For any responses I present my deep appreciation.

    Read the article

  • Where are possible locations of queueing/buffering delays in Linux multicast?

    - by Matt
    We make heavy use of multicast messaging across many Linux servers on a LAN. We are seeing a lot of delays. We basically send an enormous number of small packets. We are more concerned with latency than throughput. The machines are all modern, multi-core (at least four, generally eight, 16 if you count hyperthreading) machines, always with a load of 2.0 or less, usually with a load less than 1.0. The networking hardware is also under 50% capacity. The delays we see look like queueing delays: the packets will quickly start increasing in latency, until it looks like they jam up, then return to normal. The messaging structure is basically this: in the "sending thread", pull messages from a queue, add a timestamp (using gettimeofday()), then call send(). The receiving program receives the message, timestamps the receive time, and pushes it onto a queue. In a separate thread, the queue is processed, analyzing the difference between the sending and receiving timestamps. (Note that our internal queues are not part of the problem, since the timestamps are added outside of our internal queuing.) We don't really know where to start looking for an answer to this problem. We're not familiar with Linux internals. Our suspicion is that the kernel is queuing or buffering the packets, either on the send side or the receive side (or both). But we don't know how to track this down and trace it. For what it's worth, we're using CentOS 4.x (RHEL kernel 2.6.9).
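
    (Illustrative sketch, added here, not from the original post; the poster's code is C on CentOS, but the timestamping structure described translates directly, and the group address and port below are made up.) A minimal sender/receiver pair that stamps each packet at send time and measures the delta at receive time:

        import java.net.DatagramPacket;
        import java.net.InetAddress;
        import java.net.MulticastSocket;
        import java.nio.ByteBuffer;

        public class McastLatency {
            static final int PORT = 4446; // hypothetical

            // Sender: embed the wall-clock send time in the payload.
            static void send(MulticastSocket sock, InetAddress group) throws Exception {
                byte[] buf = ByteBuffer.allocate(8)
                        .putLong(System.currentTimeMillis()).array();
                sock.send(new DatagramPacket(buf, buf.length, group, PORT));
            }

            // Receiver: compare the embedded send time with the local receive time.
            // Across hosts this mixes two clocks, so it measures latency + clock
            // offset - which is fine for watching deltas grow and shrink over time.
            static void receiveLoop(InetAddress group) throws Exception {
                MulticastSocket sock = new MulticastSocket(PORT);
                sock.joinGroup(group);
                byte[] buf = new byte[8];
                DatagramPacket pkt = new DatagramPacket(buf, buf.length);
                while (true) {
                    sock.receive(pkt);
                    long sentAt = ByteBuffer.wrap(pkt.getData(), 0, 8).getLong();
                    System.out.println("delta ms: " + (System.currentTimeMillis() - sentAt));
                }
            }
        }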

    Read the article

  • How do software projects go over budget and under-deliver?

    - by Carlos
    I've come across this story quite a few times here in the UK: NHS Computer System. Summary: we're spunking £12 billion on some health software with barely anything working. I was sitting in the office discussing this with my colleagues, and we had a little think about it. From what I can see, all the NHS needs is a database + a middle tier of drugs/hospitals/patients/prescriptions objects, and various GUIs for doctors and nurses to look at. You'd also need to think about security and scalability. And you'd need to sit around a hospital/pharmacy/GP's office for a bit to figure out what they need. But, all told, I'd say I could knock together something with that kind of structure in a couple of days, and maybe throw in a month or two to make it work at scale. If I had a few million quid, I could probably hire some really excellent designers to make a maintainable codebase, and also buy appropriate hardware to run the system on. I hate to trivialize something that seems to have caused so much trouble, but to me it looks like just a big distributed CRUD + UI system. So how on earth did this project bloat to £12B without producing much useful software? As I don't think the software sounds so complicated, I can only imagine that something about how it was organised caused this mess. Is outsourcing the problem? Is it that the software designers weren't made to understand the medical business? What are your experiences with projects that went over budget and under-delivered? What are best practices for large projects? Have you ever worked on such a project?

    Read the article

  • Indexing/Performance strategies for vast amount of the same value

    - by DrColossos
    Base information: this is in the context of the indexing process of OpenStreetMap data. To simplify the question: the core information is divided into 3 main types with the values "W", "R", "N" (VARCHAR(1)). The table has somewhere around ~75M rows; the rows with "W" make up ~42M of them. Existing indexes are not relevant to this question. Now the question itself: the indexing of the data is done via a procedure. Inside this procedure, there are some loops that do the following:

        [...]
        SELECT * FROM table WHERE the_key = 'W';
        [...]

    The results get looped over again, and the above query itself is also in a loop. This takes a lot of time and slows down the process massively. An index on the_key is obviously useless, since all the values that the index might use are the same ("W"). The script itself runs at a speed that is OK; only the SELECTing takes very long. Do I:

    - need to create a "special" kind of index that takes this into account and makes the SELECT quicker? If so, which one?
    - need to tune some of the server parameters (they are already tuned, and the results they deliver seem to be good; if needed, I can post them)?
    - have to live with the speed and simply get more hardware to gain more power (Tim Taylor grunt grunt)?
    - Any alternatives to the above points (except rewriting it or not using it)?

    Read the article

  • Best (Java) book for understanding 'under the bonnet' for programming?

    - by Ben
    What would you say is the best book to buy to understand exactly how programming works under the hood, in order to increase performance? I've coded in assembly at university, I studied computer architecture, and I obviously did high-level programming, but what I really don't understand is things like:

    - what is happening when I perform a cast?
    - what's the difference in performance if I declare something global as opposed to local?
    - how does the memory layout for an ArrayList compare with a Vector or LinkedList?
    - what's the overhead with pointers?
    - are locks more efficient than using synchronized?
    - would creating my own array using int[] be faster than using ArrayList?
    - advantages/disadvantages of declaring a variable volatile

    I have got a copy of Java Performance Tuning, but it doesn't go down very low and it contains rather obvious things, like suggesting a HashMap instead of an ArrayList since you can map the keys to memory addresses, etc. I want something a bit more computer-sciencey, linking the programming language to what happens in the assembler/hardware. The reason I'm asking is that I have an interview coming up for a job in high-frequency trading and everything has to be as efficient as possible, yet I can't remember every single possible efficiency saving, so I'd just like to learn the fundamentals. Thanks in advance
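
    (Illustrative sketch, added here, not from the original post.) One of the listed items - int[] versus ArrayList - is easy to see concretely: ArrayList stores boxed Integer objects behind a layer of indirection, while int[] is a flat block of primitives. A crude timing sketch (deliberately naive - no JIT warmup or benchmark harness):

        import java.util.ArrayList;
        import java.util.List;

        public class BoxingCost {
            public static void main(String[] args) {
                final int n = 10000000;
                int[] raw = new int[n];
                List<Integer> boxed = new ArrayList<Integer>(n);
                for (int i = 0; i < n; i++) { raw[i] = i; boxed.add(i); } // add() autoboxes

                long t0 = System.nanoTime();
                long sum1 = 0;
                for (int v : raw) sum1 += v;   // reads primitives straight out of the array
                long t1 = System.nanoTime();
                long sum2 = 0;
                for (int v : boxed) sum2 += v; // each element is unboxed via a pointer chase
                long t2 = System.nanoTime();

                System.out.printf("int[]: %d ms, ArrayList<Integer>: %d ms (sums %d/%d)%n",
                        (t1 - t0) / 1000000, (t2 - t1) / 1000000, sum1, sum2);
            }
        }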

    Read the article

  • Compiling scalafx for Java 7u7 (that contains JavaFX 2.2) on OS X

    - by akauppi
    The compilation instructions for scalafx say to do:

        export JAVAFX_HOME=/Path/To/javafx-sdk2.1.0-beta
        sbt clean compile package make-pom package-src

    However, with the new packaging of JavaFX as part of the Java JDK itself (i.e. 7u7 for OS X), there no longer seems to be such a 'javafx-sdkx.x.x' folder. The Oracle docs say that the JavaFX JDK is placed alongside the main Java JDK (in the same folders). So I do:

        $ export JAVAFX_HOME=/Library/Java/JavaVirtualMachines/jdk1.7.0_07.jdk
        $ sbt clean
        [warn] Using project/plugins/ (/Users/asko/Sources/scalafx/project/plugins) for plugin configuration is deprecated.
        [warn] Put .sbt plugin definitions directly in project/,
        [warn] .scala plugin definitions in project/project/,
        [warn] and remove the project/plugins/ directory.
        [info] Loading project definition from /Users/asko/Sources/scalafx/project/plugins/project
        [info] Loading project definition from /Users/asko/Sources/scalafx/project/plugins
        [error] java.lang.NullPointerException
        [error] Use 'last' for the full log.
        Project loading failed: (r)etry, (q)uit, (l)ast, or (i)gnore?

    Am I doing something wrong, or is scalafx not yet compatible with the latest Java release (7u7, JavaFX 2.2)? What can I do? http://code.google.com/p/scalafx/

    Addendum: ...and finally (following Igor's solution below) sbt run launches the colorful circles demo easily (well, if one has a supported GPU, that is). Oracle claims that "JavaFX supports graphic hardware acceleration on any Mac OS X system that is Lion or later", but I am inclined to think the NVidia-powered Mac Mini I'm using does software rendering. A recent MacBook Air (core i7) is a completely different beast! :)

    Read the article

  • How do I insert this subclass into my code?

    - by BamsBamx
    This is a very noob question, so I hope you can help me with this... This is my code (note the click listener is declared before it is passed to findPreference(), and super.onCreate() is called first):

        public class PantallaOpciones extends PreferenceActivity {
            private SharedPreferences preferences;

            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                preferences = PreferenceManager.getDefaultSharedPreferences(this);

                Preference.OnPreferenceClickListener keycodeListener =
                        new Preference.OnPreferenceClickListener() {
                    public boolean onPreferenceClick(Preference preference) {
                        keycodedialog();
                        return false;
                    }
                };
                findPreference("speechkeycode").setOnPreferenceClickListener(keycodeListener);
            }

            private void keycodedialog() {
                final Dialog dialog = new Dialog(this);
                dialog.setContentView(R.layout.keycodedialog);
                dialog.setTitle("Speech keycode");

                final TextView keypresstext = (TextView) findViewById(R.id.keypresstext);

                Button savekeycode = (Button) dialog.findViewById(R.id.btnsavekeycode);
                savekeycode.setOnClickListener(new OnClickListener() {
                    public void onClick(View v) {
                        dialog.dismiss();
                    }
                });

                Button resetkeycode = (Button) dialog.findViewById(R.id.btnresetvalue);
                resetkeycode.setOnClickListener(new OnClickListener() {
                    public void onClick(View v) {
                        dialog.dismiss();
                    }
                });

                dialog.show();
            }
        }

    Okay, now I want to add this to the dialog:

        public boolean onKeyDown(int keyCode, KeyEvent event) {
            // SOME STUFF
            return super.onKeyDown(keyCode, event);
        }

    So I want to listen for a key press while the dialog is open and show the keycode of the hardware key pressed using textview.setText()... The question is: how do I attach onKeyDown to the dialog? Thanks in advance!! :)
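
    (Illustrative sketch, added here, not part of the original question.) One way to receive hardware key presses while a Dialog is showing is Dialog.setOnKeyListener(), which routes key events for the dialog's window to a DialogInterface.OnKeyListener - a hedged sketch reusing the keypresstext TextView from the code above:

        // Inside keycodedialog(), after dialog.setTitle(...).
        // Requires: import android.content.DialogInterface;
        dialog.setOnKeyListener(new DialogInterface.OnKeyListener() {
            public boolean onKey(DialogInterface d, int keyCode, KeyEvent event) {
                if (event.getAction() == KeyEvent.ACTION_DOWN) {
                    keypresstext.setText("Keycode: " + keyCode); // show which key was hit
                }
                return false; // let the dialog also handle the key normally
            }
        });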

    Read the article

  • Determine asymmetric latencies in a network

    - by BeeOnRope
    Imagine you have many clustered servers, across many hosts, in a heterogeneous network environment, such that the connections between servers may have wildly varying latencies and bandwidth. You want to build a map of the connections between servers by transferring data between them. Of course, this map may become stale over time as the network topology changes - but let's ignore those complexities for now and assume the network is relatively static. Given the latencies between nodes in this host graph, calculating the bandwidth is a relatively simple timing exercise. I'm having more difficulty with the latencies, however. To get round-trip time, it is a simple matter of timing a return-trip ping from the local host to a remote host - both timing events (start, stop) occur on the local host. What if I want one-way times, under the assumption that the latency is not equal in both directions? Assuming that the clocks on the various hosts are not precisely synchronized (at least that their error is of the same magnitude as the latencies involved) - how can I calculate the one-way latency? In a related question - is this asymmetric latency (where a link is quicker in one direction than the other) common in practice? For what reasons/hardware configurations? Certainly I'm aware of asymmetric bandwidth scenarios, especially on last-mile consumer links such as DSL and cable, but I'm not so sure about latency. Added: after considering the comment below, the second portion of the question is probably better off on Server Fault.
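
    (Illustrative sketch, added here, not from the original post.) The standard NTP-style exchange makes the difficulty concrete: four timestamps per round trip yield the round-trip delay and the apparent clock offset, but the individual one-way latencies only under an assumption of symmetry - the asymmetry itself cannot be observed without synchronized clocks:

        // NTP-style timestamps for one request/response exchange:
        //   t0 = client send time      (client clock)
        //   t1 = server receive time   (server clock)
        //   t2 = server send time      (server clock)
        //   t3 = client receive time   (client clock)
        public class OneWayLatency {
            static void analyze(long t0, long t1, long t2, long t3) {
                // Total wire time, both directions; server processing time cancels out.
                long rtt = (t3 - t0) - (t2 - t1);
                // Server clock minus client clock, *assuming* symmetric latency.
                double offset = ((t1 - t0) + (t2 - t3)) / 2.0;
                // Symmetric-path estimate; the true one-way values could be anywhere
                // in [0, rtt] as long as forward + reverse = rtt.
                double oneWay = rtt / 2.0;
                System.out.printf("rtt=%d offset=%.1f one-way=%.1f (timestamp units)%n",
                        rtt, offset, oneWay);
            }
        }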

    Read the article

  • Creating a network adapter - how hard is it?

    - by Vilx-
    I'm interested in building a little (commercial) device on top of Arduino. I want it to be able to interface with a network - network as in standard Ethernet, Cat5, RJ-45, etc. I know that there is an Ethernet Shield, but it costs even more than the Arduino itself, and it's pretty big. Naturally, I want my device to be as small and as cheap as possible. So I'm thinking about recreating an Ethernet module myself. The problem is - I haven't got any experience with Ethernet, nor do I have a good idea where to start looking. Thus I can't even say if my ideas are feasible. Ultimately I would like the device to have three ports - one for the incoming signal, two outgoing - so the device is essentially a little switch that it is itself plugged into as well. The switching capabilities need not be very fast - the volume of data will be low. 10Mbit is more than enough; it can be even slower. If that is not possible, a single port for controlling the device itself will also do. Another possibility I'm considering is power line communications - sending information through power lines. That's another area I have no experience with. What hardware should I be looking at, and where can I find information about the necessary software? So - can anyone tell me if these ideas are feasible, and if yes - where should I start looking?

    Read the article

  • Migrating MachineKey from iis6 on old server to iis7 on new server

    - by MaseBase
    I am migrating our hosting environment to a totally new data center, with new boxes and hardware and software... the whole deal. Our website cookies are encrypted using the machineKey, so when I make a request to my domain and point it at the new web server (by overriding the local hosts file), I get an error because the cookie cannot be decrypted, since the machine key is different. I'd like to avoid any problems a frequent user might have when they arrive at the new server for the first time. To the best of my knowledge, at this point I think I need to set the same machineKey from our current servers on our new servers. That way, when past visitors with a cookie arrive at our website served by the new server, the cookie will be decrypted properly with the machineKey it was encrypted with, and they will be logged in properly. My question is: where do I find my machineKey value (on an IIS 6 / Win2k3 server) so I can set that value statically on my new servers? I've pulled up my machine.config file, but it doesn't specify the key; it only specifies a configSection where the key can be defined. It's not in my web.config for the app or elsewhere. I did find this great article on some machineKey and Web Garden woes (which could explain some other bugs I've been experiencing with regard to the machineKey). Update: I am back on this issue and am still faced with a similar problem. The machineKey is auto-generated on the IIS 6 server, but I need to get that exact key so I can set it explicitly and not have it auto-generated anymore. Any help is appreciated...

    Read the article

  • pthread_join from a signal handler

    - by liv2hak
    I have a capture program which, in addition to capturing data and writing it to a file, also prints some statistics. The function that prints the statistics:

        static void *report(void)
        {
            /* Print statistics */
        }

    is called roughly every second, using an alarm that expires every second. So the program is like:

        void capture_program()
        {
            pthread_t report_thread;

            while (!exit_now) {
                if (pthread_create(&report_thread, NULL, report, NULL)) {
                    fprintf(stderr, "Error creating reporting thread!\n");
                }
                /* Capturing code
                   --------------
                   -------------- */
                if (doreport)
                    usleep(5);
            }
        }

        void *report(void *param)
        {
            while (true) {
                if (doreport) {
                    doreport = 0;
                    /* access some register from hardware */
                    usleep(5);
                }
            }
        }

    The expiry of the timer sets the doreport flag. If this flag is set, report() is called, which clears the flag. I am using usleep to alternate between the two threads in the program. This seems to work fine. I also have a signal handler to handle SIGINT (i.e. Ctrl+C):

        static void anysig(int sig)
        {
            if (sig != SIGINT)
                dagutil_set_signal_handler(SIG_DFL);

            /* Tell the main loop to exit */
            exit_now = 1;

            return;
        }

    My questions:
    1) Is it safe to call pthread_join from inside the signal handler?
    2) Should I use the exit_now flag for the report thread as well?

    Read the article

  • How to determine if the camera button is half pressed

    - by Matthew
    I am creating a small test camera application, and I would like to implement a feature that shows focus brackets on the screen while the hardware camera button is pressed halfway down. I created a camera_ButtonHalfPress event to perform the focus action, but I am unsure how to toggle the brackets I would like to show on the screen accordingly. Essentially, my goal is to show the brackets while the camera button is pressed halfway down, and then remove them if the button is pressed all the way or released before being pressed all the way down. The button being released is the part I am having trouble with. What I have is as follows:

    MainPage.xaml.cs

        private void camera_ButtonHalfPress(object sender, EventArgs e)
        {
            //camera.Focus();

            // Show the focus brackets.
            focusBrackets.Visibility = Visibility.Visible;
        }

        private void camera_ButtonFullPress(object sender, EventArgs e)
        {
            // Hide the focus brackets.
            focusBrackets.Visibility = Visibility.Collapsed;
            camera.CaptureImage();
        }

    Currently, if the user releases the camera button before it is pressed all the way, the focus brackets persist on the screen. How might I fix this issue?

    Read the article

  • Prompting for authentication from a wxPython program and passing it along to IIS?

    - by MetaHyperBolic
    I have a client (written in Python, with a wxPython front end in dead-simple wizard style) which communicates with a website running IIS. A Python script receives requests and does the usual client-server dance. I would have written this as a browser application, but for the requirement that certain things happen on the local PC that the web can't help with (file manipulation, interfacing with certain USB hardware, etc.). Right now, I am simply using the logon credentials, compounded as a string from os.environ['USERDOMAIN'] and os.environ['USERNAME'], to pass along to the server, which connects to Active Directory and enumerates the members of the group, looking for those logon credentials. It's an ugly hack, but it works. Obviously, I could make people log out of the generic helper accounts and log back into Windows using specific accounts. However, I wondered how feasible it would be to provide some kind of logon prompt wherein the user can type in a name and password, and then pass some kind of authorization token along to IIS. This seems like something I would not want to do myself, given that amateurs almost always make huge security mistakes. Now you can see why I am wishing this was purely web-based. What's a good way to handle this?

    Read the article
