Search Results

Search found 1931 results on 78 pages for 'bsd sockets'.

Page 42/78 | < Previous Page | 38 39 40 41 42 43 44 45 46 47 48 49  | Next Page >

  • Why are my USB 2.0 devices hanging Windows XP?

    - by BenAlabaster
Background on the machine I'm having a problem with: The machine was inherited and appears to be circa 2003 (there's a date stamp on the power supply which leads me to this conclusion). I've got it set up as a Skype terminal for my 2 year old to keep in touch with her grandparents and other members of the family - which everyone loves. It has a generic ATX motherboard with no identifying markings other than one stamp that says "Rev.B". CPU-Z identifies the motherboard model as VT8601 but doesn't provide me with any manufacturer name. On board it has 1 x 10/100 LAN, 2 x USB 1.0, VGA, PS/2 for KB and mouse, parallel port, 2 x serial ports, 2 x IDE, 1 x floppy, 2 x SDRAM slots, 1 x CPU housing that is seating a 1.3GHz Intel Celeron CPU, 3 x PCI, 1 x AGP - although you can only use 2 of the PCI slots if you use the AGP slot due to the physical layout of the board. It's got 768Mb PC133 SDRAM - 1 x 512Mb & 1 x 256Mb installed - as well as a D-LINK WDA-2320 54G Wi-Fi network card and a generic USB 2.0 expansion board containing 3 x external + 1 x internal USB connectors - it has a NEC uPD720102 chipset. It has a DVD+/-RW running as master on IDE1 and a 1.44Mb 3.5" floppy drive connected to the floppy connector. It has an 80Gb Western Digital hard drive running as master on IDE0. All this is sitting in a slimline case. I don't know the wattage of the PSU, but can post this later if this proves to be helpful. The motherboard is running a version of Award BIOS for which I don't have the version number to hand, but can again post this later if it would be helpful. The hard disk is freshly formatted and built with Windows XP Professional/Service Pack 3 and is up to date with all current patches. In addition to Windows XP, the only other software it's running is Skype 4.1 (4.2 hangs the whole machine as soon as it starts up, requiring a hard boot to recover). It's got a Daytek MV150 15" touch screen hooked up to the on board VGA and COM1 sockets with the most current drivers from the Daytek website and the most current version of ELO-Touchsystems drivers for the touch component. The webcam is a Logitech Webcam C200 with the latest drivers from the Logitech website.

    The problem: If I hook any devices to the USB 2.0 sockets, it hangs the whole machine and I have to hard boot it to get it back up. If I have any devices attached to the USB 2.0 sockets when I boot up, it hangs before Windows gets to the login prompt and I have to hard boot it to recover.

    Workarounds found: I can plug the same devices into the on board USB 1.0 sockets and everything works fine, albeit at reduced performance. I've tried 3 different kinds of USB thumb drives, 3 different makes/models of webcams and my iPhone, all with the same effect. They're recognized and don't hang the machine when I hook them to the USB 1.0 ports, but if I hook them to the USB 2.0 ports, the machine hangs within a couple of seconds of recognizing the devices were connected.

    Attempted solutions: I've seen suggestions that this could be a power problem - that the PSU just doesn't have the wattage to drive these ports. While I'm doubtful this is the problem [after all, the motherboard has the same standard connector regardless of the PSU wattage], I tried disabling all the on board devices that I'm not using - on board LAN, the second COM port, the AGP connector etc. - through the BIOS, in what I'm sure is a futile attempt to reduce the power consumption... I also modified the ACPI and power management settings. It didn't have any noticeable effect, although it didn't do any harm either. Could the wattage of the PSU really cause this problem? If it can, is there anything I need to be aware of when replacing it, or do I just need to make sure it's got a higher wattage than the current one? My interpretation was that the wattage only affected the number of drives you could hook up to the power connectors - is that right? I've installed the USB card in another machine and it works without issue, so it's not a problem with the USB card itself, and Windows says the card is installed and working correctly... right up until I connect a device to it. The only thing I haven't done, which I only just thought of while writing this essay, is trying the USB 2.0 card in a different PCI slot, or re-ordering the wi-fi and USB cards in the slots... although I'm not sure if this will make any difference - does anyone have any experience that would suggest this might work?

    Other thoughts/questions: Perhaps this is an incompatibility between the USB 2.0 card and the BIOS; would re-flashing the BIOS with a newer version help? Do I need to be able to identify the manufacturer of the motherboard in order to find a BIOS edition specific to this motherboard, or will any version of Award BIOS function in its place?

    Question: Does anyone have any ideas that could help me get my USB 2.0 devices hooked up to this machine?

    Edit: Updated the USB 2.0 info with reference to the actual card - http://www.xpcgear.com/lpnec4u.html

    Read the article

  • Using video ports 'backwards'

    - by ACarter
If I want to connect my laptop, which has a VGA output, to a monitor, I plug a VGA cable into both ends - the output being my laptop, and the input the monitor. VGA output sockets are the same as VGA input sockets, however, so what if I want to use my laptop as the monitor, with the video being output from somewhere else? As I said, VGA's output is the same as its input, so in theory I have the hardware in my laptop to do this. But presumably I need some software. So can I use the VGA port on a laptop as a video input? (And also, can this be done with HDMI?)

    Read the article

  • Receiving Multicast Messages on a Multihomed Windows PC

    - by Basti
I'm developing a diagnostic tool, based on multicast/UDP, on a PC with several network interfaces. The user can select a NIC; the application creates sockets, binds them to this NIC and adds them to the specific multicast group. Sending multicast messages works fine. However, receiving messages only succeeds if I bind the sockets to one specific NIC of my PC. It almost looks as if there is a 'default' NIC for receiving multicast messages in Windows, which is always the first NIC returned by the GetAdaptersInfo function. I monitored the network with Wireshark and discovered that the "IGMP Join Group" message isn't sent from the NIC I bound the socket to, but by this 'default' NIC. If I disable this NIC (or remove the network cable), the next NIC in the list returned by GetAdaptersInfo is used for receiving multicast messages. I was able to change this 'default' NIC by adding an additional entry to the routing table of my PC, but I don't think this is a good solution to the problem. The problem also occurs with the code appended below: the join-group message isn't sent via 192.168.1.52 but via a different NIC.

        // socket_tst.cpp : Defines the entry point for the console application.
        #include <tchar.h>
        #include <winsock2.h>
        #include <ws2ipdef.h>
        #include <IpHlpApi.h>
        #include <IpTypes.h>
        #include <stdio.h>

        int _tmain(int argc, _TCHAR* argv[])
        {
            WSADATA m_wsaData;
            SOCKET m_socket;
            sockaddr_in m_sockAdr;
            UINT16 m_port = 319;
            u_long m_interfaceAdr = inet_addr("192.168.1.52");
            u_long m_multicastAdr = inet_addr("224.0.0.107");

            int returnValue = WSAStartup(MAKEWORD(2,2), &m_wsaData);
            if (returnValue != S_OK) {
                return returnValue;
            }

            // Create sockets
            if (INVALID_SOCKET == (m_socket = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP))) {
                return WSAGetLastError();
            }

            int doreuseaddress = TRUE;
            if (setsockopt(m_socket, SOL_SOCKET, SO_REUSEADDR,
                           (char*)&doreuseaddress, sizeof(doreuseaddress)) == SOCKET_ERROR) {
                return WSAGetLastError();
            }

            // Configure socket addresses
            memset(&m_sockAdr, 0, sizeof(m_sockAdr));
            m_sockAdr.sin_family = AF_INET;
            m_sockAdr.sin_port = htons(m_port);
            m_sockAdr.sin_addr.s_addr = m_interfaceAdr;

            // bind sockets
            if (bind(m_socket, (SOCKADDR*)&m_sockAdr, sizeof(m_sockAdr)) == SOCKET_ERROR) {
                return WSAGetLastError();
            }

            // join multicast
            struct ip_mreq_source imr;
            memset(&imr, 0, sizeof(imr));
            imr.imr_multiaddr.s_addr = m_multicastAdr;  // address of multicast group
            imr.imr_sourceaddr.s_addr = 0;              // source address (not used)
            imr.imr_interface.s_addr = m_interfaceAdr;  // interface address

            /* first join multicast group, then register selected interface as
             * multicast sending interface */
            if (setsockopt(m_socket, IPPROTO_IP, IP_ADD_MEMBERSHIP,
                           (char*)&imr, sizeof(imr)) == SOCKET_ERROR) {
                return SOCKET_ERROR;
            } else {
                if (setsockopt(m_socket, IPPROTO_IP, IP_MULTICAST_IF,
                               (CHAR*)&imr.imr_interface.s_addr,
                               sizeof(&imr.imr_interface.s_addr)) == SOCKET_ERROR) {
                    return SOCKET_ERROR;
                }
            }

            printf("receiving msgs...\n");
            while (1) {
                // get input buffer from socket
                int sock_return = SOCKET_ERROR;
                sockaddr_in socketAddress;
                char buffer[1500];
                int addressLength = sizeof(socketAddress);
                sock_return = recvfrom(m_socket, (char*)&buffer, 1500, 0,
                                       (SOCKADDR*)&socketAddress, &addressLength);
                if (sock_return == SOCKET_ERROR) {
                    int wsa_error = WSAGetLastError();
                    return wsa_error;
                } else {
                    printf("got message!\n");
                }
            }
            return 0;
        }

    Thanks for your help!
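    One detail in the code above may be worth a second look: IP_ADD_MEMBERSHIP expects the 8-byte ip_mreq structure (imr_multiaddr, imr_interface), while the code passes the larger ip_mreq_source (imr_multiaddr, imr_sourceaddr, imr_interface). If the stack reads only the first 8 bytes, the zeroed imr_sourceaddr lands where the interface is expected, i.e. INADDR_ANY, which would send the join out the default NIC - exactly the observed symptom. This is an educated guess, not a confirmed diagnosis; a minimal sketch with plain ip_mreq (assuming WSAStartup has already run and s is the bound UDP socket):

        #include <winsock2.h>
        #include <ws2tcpip.h>
        #include <string.h>

        // Join 'group' on the NIC whose unicast address is 'nic', using the
        // 8-byte ip_mreq that IP_ADD_MEMBERSHIP is documented to take.
        static int join_group_on_nic(SOCKET s, const char* group, const char* nic)
        {
            struct ip_mreq mreq;                         // imr_multiaddr, imr_interface
            memset(&mreq, 0, sizeof(mreq));
            mreq.imr_multiaddr.s_addr = inet_addr(group);
            mreq.imr_interface.s_addr = inet_addr(nic);  // pin the joining NIC
            return setsockopt(s, IPPROTO_IP, IP_ADD_MEMBERSHIP,
                              (char*)&mreq, sizeof(mreq));
        }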

    Read the article

  • Why is my emit not getting called?

    - by cRaZiRiCaN
The client and server connect just fine. For some reason the emit on my client is not firing correctly. I am trying to get testEmit and testEmit2 working. This is my server:

        express = require 'express'
        mongo = require 'mongodb'

        app = express()
        server = (require 'http').createServer(app)
        io = (require 'socket.io').listen(server)
        server.listen(8080)

        app.use(express.static(__dirname + '/public'))

        # db = new mongo.Db("documentsdb", new mongo.Server("localhost", 27017, auto_reconnect: true), {safe:true})

        io.sockets.on 'connection', (socket) ->
          console.log 'Socket.io is connected!'

          # This returns an array of documents sorted via date in decreasing order. (Most recent documents first.)
          socket.on 'loadRecentDocuments', ->
            console.log 'Loading most recent documents.'
            db.collection 'documents', (err, collection) ->
              collection.find().sort(dateAdded: -1).toArray (err, documents) ->
                # This emit is received at index.html, where a javascript function sendDocuments manages the documents.
                socket.emit 'sendDocuments', documents
            return

        # The index.html provides the code data from the search box via a javascript.
        io.sockets.on 'findDocuments', (code) ->
          # Returns an array of documents with the corresponding class code.
          documentCodeToSearch = code
          console.log 'Retrieving documents with code: ' + documentCodeToSearch
          db.collection 'documents', (err, collection) ->
            collection.find(code: documentCodeToSearch).toArray (err, documents) ->
              socket.emit 'sendDocuments', documents
          return

        # Uploads a document to the server. documentData is sent via javascript from submit.html
        io.sockets.on 'addDocument', (documentData) ->
          console.log 'Adding document: ' + documentData
          db.collection 'documents', (err, collection) ->
            collection.insert documentData, safe: true
          return

        # Test socket.io
        io.sockets.on 'testEmit', ->
          console.log('Emit received.')
          socket.emit 'testEmit2', 'caca'
          return

        app.listen 1337
        console.log "Listening on port 1337..."

    This is my client:

        <!doctype HTML>
        <html>
        <head>
          <title>ProjectShare</title>
          <script src="http://localhost:8080/socket.io/socket.io.js"></script>
          <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.8.2/jquery.min.js"></script>
          <script>
            //Make sure DOM is ready before mucking around.
            $(document).ready(function() {
              console.log('jQuery entered!');
              var socket = io.connect('http://localhost:8080');
              socket.emit('testEmit');
              socket.on('testEmit2', function(data) {
                console.log('Emit received at browser.');
                console.log(data);
              });
              console.log('jQuery exit.');
            });
          </script>
        </head>
        <body>
          <ol>
            <li><a href="index.html">ProjectShare</a></li>
            <li><a href="guidelines.html">Guidelines</a></li>
            <li><a href="upload.html">Upload</a></li>
            <li>
              <form>
                <input type="search" placeholder="enter class code"/>
                <input type="submit" value="Go"/>
              </form>
            </li>
          </ol>
          <ol id="documentList">
          </ol>
        </body>
        </html>

    Read the article

  • Java stored procedures in Oracle, a good idea?

    - by Scott A
I'm considering using a Java stored procedure as a very small shim to allow UDP communication from a PL/SQL package. Oracle does not provide a UTL_UDP to match its UTL_TCP. There is a 3rd-party XUTL_UDP that uses Java, but it's closed source (meaning I can't see how it's implemented, not that I don't want to use closed source). An important distinction between PL/SQL and Java stored procedures with regard to networking: PL/SQL sockets are closed when dbms_session.reset_package is called, but Java sockets are not. So if you want to keep a socket open to avoid the tear-down/reconnect costs, you can't do it in sessions that are using reset_package (like mod_plsql or mod_owa HTTP requests). I haven't used Java stored procedures in a production capacity in Oracle before. This is a very large, heavily-used database, and this particular shim would be heavily used as well (it serves as a UDP bridge between a PL/SQL RFC 5424 syslog client and the local rsyslog daemon). Am I opening myself up for woe and horror, or are Java stored procedures stable and robust enough for use in 10g? I'm wondering about issues with the embedded JVM, the JIT, garbage collection, or other things that might impact a heavily used database.

    Read the article

  • Turn-based Client-Server Card Game - Unicast (TCP) or Multicast (UDP)

    - by LDM91
I am currently planning to make a card game project where the clients will communicate with the server in a turn-based and synchronous manner using messages sent over sockets. The problem I have is how to handle the following scenario (the client takes its turn and sends its action to the server):

    1. Client sends a message telling the server its move for the turn (e.g. plays the card 5 from its hand, which needs to be placed onto the table).
    2. Server receives the message and updates the game state (the server will hold all game state).
    3. Server iterates through a list of connected clients and sends a message to tell them of the change in state (see the sketch after this paragraph).
    4. Clients all refresh to display the state.

    This is all based on using TCP, and looking at it now it seems a bit like the Observer pattern. The reason this seems to be an issue to me is that this message doesn't seem to be point-to-point like the others, as I want to send it to all the clients, and it doesn't seem very efficient sending the same message in that way. I was thinking about using multicasting with UDP, as then I could send the message to all the clients at once; however, wouldn't this mean that the clients would in theory be able to message each other? There is of course the synchronous aspect as well, though this could be put on top of the UDP I guess. Basically, I would like to know what would be good practice, as this project is really all about learning, and even though it won't be big enough to encounter performance issues from this, I would like to consider them anyway. However, please note I am not interested in using message-oriented middleware as a solution (I have experience with using MOM, and I'm interested in considering other options excluding MOM - if TCP sockets is a bad idea!).
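    For the fan-out in step 3, simply iterating over the accepted TCP sockets is the usual approach for a handful of players. A minimal sketch (POSIX sockets and a pre-serialized state snapshot assumed; the names are illustrative):

        #include <string>
        #include <vector>
        #include <sys/socket.h>

        // Send the new game state to every connected client over TCP.
        // 'clients' holds the fds returned by accept(). For a turn-based
        // game with few players, one send() per client is cheap; a real
        // server would also loop on partial sends and drop dead clients.
        void broadcast_state(const std::vector<int>& clients, const std::string& state)
        {
            for (int fd : clients) {
                // MSG_NOSIGNAL (Linux) avoids SIGPIPE if a client vanished
                send(fd, state.data(), state.size(), MSG_NOSIGNAL);
            }
        }

    At card-table scale the per-client send cost is negligible, and plain TCP keeps the ordering and reliability the turn sequence needs.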

    Read the article

  • Warnings flagged by 'rkhunter'

    - by gkt.pro
When I scanned my Ubuntu 10.04 with rkhunter (a rootkit hunter toolkit), it gave the following warnings. Is there something here that I have to worry about?

        [23:06:19] /usr/sbin/adduser [ Warning ]
        [23:06:19] Warning: The command '/usr/sbin/adduser' has been replaced by a script: /usr/sbin/adduser: a /usr/bin/perl script text executable
        [23:06:20] /usr/sbin/rsyslogd [ Warning ]
        [23:06:20] Warning: The file properties have changed:
        [23:06:22] /usr/bin/dpkg [ Warning ]
        [23:06:22] Warning: The file properties have changed:
        [23:06:22] /usr/bin/dpkg-query [ Warning ]
        [23:06:22] Warning: The file properties have changed:
        [23:06:24] /usr/bin/ldd [ Warning ]
        [23:06:24] Warning: The file properties have changed:
        [23:06:24] Warning: The command '/usr/bin/ldd' has been replaced by a script: /usr/bin/ldd: Bourne-Again shell script text executable
        [23:06:24] /usr/bin/logger [ Warning ]
        [23:06:24] Warning: The file properties have changed:
        [23:06:25] /usr/bin/mail [ Warning ]
        [23:06:25] Warning: The file '/usr/bin/mail' exists on the system, but it is not present in the rkhunter.dat file.
        [23:06:27] /usr/bin/sudo [ Warning ]
        [23:06:27] Warning: The file properties have changed:
        [23:06:29] /usr/bin/whereis [ Warning ]
        [23:06:29] Warning: The file properties have changed:
        [23:06:29] /usr/bin/lwp-request [ Warning ]
        [23:06:29] Warning: The command '/usr/bin/lwp-request' has been replaced by a script: /usr/bin/lwp-request: a /usr/bin/perl -w script text executable
        [23:06:29] /usr/bin/bsd-mailx [ Warning ]
        [23:06:29] Warning: The file '/usr/bin/bsd-mailx' exists on the system, but it is not present in the rkhunter.dat file.
        [23:06:30] /sbin/fsck [ Warning ]
        [23:06:30] Warning: The file properties have changed:
        [23:06:30] /sbin/ifdown [ Warning ]
        [23:06:30] Warning: The file properties have changed:
        [23:06:31] /sbin/ifup [ Warning ]
        [23:06:31] Warning: The file properties have changed:
        [23:06:34] /bin/dmesg [ Warning ]
        [23:06:34] Warning: The file properties have changed:
        [23:06:35] /bin/more [ Warning ]
        [23:06:35] Warning: The file properties have changed:
        [23:06:36] /bin/mount [ Warning ]
        [23:06:36] Warning: The file properties have changed:
        [23:06:37] /bin/which [ Warning ]
        [23:06:37] Warning: The command '/bin/which' has been replaced by a script: /bin/which: POSIX shell script text executable
        [23:08:58] Checking /dev for suspicious file types [ Warning ]
        [23:08:58] Warning: Suspicious file types found in /dev:
        [23:08:58] Checking for hidden files and directories [ Warning ]
        [23:08:58] Warning: Hidden directory found: /etc/.java
        [23:08:58] Warning: Hidden directory found: /dev/.udev
        [23:08:58] Warning: Hidden directory found: /dev/.initramfs
        [23:09:01] Checking version of Exim MTA [ Warning ]
        [23:09:01] Warning: Application 'exim', version '4.71', is out of date, and possibly a security risk.
        [23:09:01] Checking version of GnuPG [ Warning ]
        [23:09:01] Warning: Application 'gpg', version '1.4.10', is out of date, and possibly a security risk.
        [23:09:01] Checking version of OpenSSL [ Warning ]
        [23:09:01] Warning: Application 'openssl', version '0.9.8k', is out of date, and possibly a security risk.

    Read the article

  • Oracle Linux and Oracle VM pricing guide

    - by wcoekaer
A few days ago someone showed me a pricing guide from a Linux vendor and I was a bit surprised at the complexity of it, especially when you look at larger servers (4 or 8 sockets) and when adding virtual machine use into the mix. I think we have a very compelling and simple pricing model for both Oracle Linux and Oracle VM. Let me see if I can explain it in 1 page, not 10 pages.

    This pricing information is publicly available on the Oracle store; I am using the current public list prices. Also keep in mind that this is for customers using non-Oracle x86 servers. When a customer purchases an Oracle x86 server, the annual systems support includes full use (all you can eat) of Oracle Linux, Oracle VM and Oracle Solaris (no matter how many VMs you run on that server, in case you deploy guests on a hypervisor). This support level is the equivalent of premier support in the list below.

    Let's start with Oracle VM (x86). Oracle VM support subscriptions are per physical server on which you deploy the Oracle VM Server product:

    (1) Oracle VM Premier Limited - 1- or 2-socket server: $599 per server per year
    (2) Oracle VM Premier - more than 2-socket server (4, or 8, or whatever more): $1199 per server per year

    The above includes the use of Oracle VM Manager and Oracle Enterprise Manager Cloud Control's Virtualization management pack (including the self-service cloud portal, etc.), 24x7 support, and access to bugfixes, updates and new releases. It also includes all options: live migrate, dynamic resource scheduling, high availability, dynamic power management, etc. If you want to play with the product, or even use the product without access to support services, the product is freely downloadable from edelivery.

    Next, Oracle Linux. Oracle Linux support subscriptions are per physical server. If you plan to run Oracle Linux as a guest on Oracle VM, VMware or Hyper-V, you only have to pay for a single subscription per system; we do not charge per guest or per number of guests. In other words, you can run any number of Oracle Linux guests per physical server and count it as just a single subscription:

    (1) Oracle Linux Network Support - any number of sockets per server: $119 per server per year. Network support does not offer support services. It provides access to the Unbreakable Linux Network and also offers full indemnification for Oracle Linux.

    (2) Oracle Linux Basic Limited Support - 1- or 2-socket servers: $499 per server per year. This subscription provides 24x7 support services, access to the Unbreakable Linux Network and the Oracle Support portal, indemnification, use of Oracle Clusterware for Linux HA and use of Oracle Enterprise Manager Cloud Control for Linux OS management. It includes ocfs2 as a clustered filesystem.

    (3) Oracle Linux Basic Support - more than 2-socket server (4, or 8, or more): $1199 per server per year. This subscription provides 24x7 support services, access to the Unbreakable Linux Network and the Oracle Support portal, indemnification, use of Oracle Clusterware for Linux HA and use of Oracle Enterprise Manager Cloud Control for Linux OS management. It includes ocfs2 as a clustered filesystem.

    (4) Oracle Linux Premier Limited Support - 1- or 2-socket servers: $1399 per server per year. This subscription provides 24x7 support services, access to the Unbreakable Linux Network and the Oracle Support portal, indemnification, use of Oracle Clusterware for Linux HA, use of Oracle Enterprise Manager Cloud Control for Linux OS management, and XFS filesystem support. It also offers Oracle Lifetime support, backporting of patches for critical customers to previous versions of packages, and Ksplice zero-downtime updates.

    (5) Oracle Linux Premier Support - more than 2-socket servers: $2299 per server per year. This subscription provides 24x7 support services, access to the Unbreakable Linux Network and the Oracle Support portal, indemnification, use of Oracle Clusterware for Linux HA, use of Oracle Enterprise Manager Cloud Control for Linux OS management, and XFS filesystem support. It also offers Oracle Lifetime support, backporting of patches for critical customers to previous versions of packages, and Ksplice zero-downtime updates.

    (6) Freely available Oracle Linux - any number of sockets. You can freely download Oracle Linux, install it on any number of servers and use it for any reason, without support, without the right to use extra features like Oracle Clusterware or Ksplice, and without indemnification. However, you do have full access to all errata as well. Need support? Then use options (1)-(5).

    So that's it. Count the number of 2-socket boxes and more-than-2-socket boxes, decide on Basic or Premier support level, and you are done. You don't have to worry about different levels based on how many virtual instances you deploy or want to deploy. A very simple menu of choices. We offer, inclusive: Linux OS clusterware, Linux OS management, provisioning and monitoring, a cluster filesystem (ocfs2), a high performance filesystem (XFS), DTrace, Ksplice, and OFED (the InfiniBand stack for high performance networking). No separate add-on menus.

    NOTE: a socket/CPU can have any number of cores. So whether you have a 4-, 6-, 8-, 10- or 12-core CPU doesn't matter; we count the number of physical CPUs.

    Read the article

  • Oracle NoSQL Database Exceeds 1 Million Mixed YCSB Ops/Sec

    - by Charles Lamb
We ran a set of YCSB performance tests on Oracle NoSQL Database using SSD cards and Intel Xeon E5-2690 CPUs with the goal of achieving 1M mixed ops/sec on a 95% read / 5% update workload. We used the standard YCSB parameters: 13 byte keys and 1KB data size (1,102 bytes after serialization). The maximum database size was 2 billion records, or approximately 2 TB of data. We sized the shards to ensure that this was not an "in-memory" test (i.e. the data portion of the B-Trees did not fit into memory). All updates were durable and used the "simple majority" replica ack policy, effectively 'committing to the network'. All read operations used the Consistency.NONE_REQUIRED parameter, allowing reads to be performed on any replica. In the past we have achieved 100K ops/sec using SSD cards on a single shard cluster (replication factor 3), so for this test we used 10 shards on 15 Storage Nodes, with each SN carrying 2 Rep Nodes and each RN assigned to its own SSD card. After correcting a scaling problem in YCSB, we blew past the 1M ops/sec mark with 8 shards and proceeded to hit 1.2M ops/sec with 10 shards.

    Hardware Configuration

    We used 15 servers, each configured with two 335 GB SSD cards. We did not have homogeneous CPUs across all 15 servers available to us, so 12 of the 15 were Xeon E5-2690, 2.9 GHz, 2 sockets, 32 threads, 193 GB RAM, and the other 3 were Xeon E5-2680, 2.7 GHz, 2 sockets, 32 threads, 193 GB RAM. There might have been some upside in having all 15 machines configured with the faster CPU, but since CPU was not the limiting factor we don't believe the improvement would be significant. The client machines were Xeon X5670, 2.93 GHz, 2 sockets, 24 threads, 96 GB RAM. Although the clients had 96 GB of RAM, neither the NoSQL Database nor the YCSB clients require anywhere near that amount of memory, and the test could just as easily have been run with much less. Networking was all 10GigE.

    YCSB Scaling Problem

    We made three modifications to the YCSB benchmark. The first was to allow the test to accommodate more than 2 billion records (effectively int's vs long's). To keep the key size constant, we changed the code to use base 32 for the user ids. The second change involved the way we run the YCSB client, in order to make the test itself horizontally scalable. The basic problem has to do with the way the YCSB test creates its Zipfian distribution of keys, which is intended to model "real" loads by generating clusters of key collisions. Unfortunately, the percentage of collisions on the most contentious keys remains the same even as the number of keys in the database increases. As we scale up the load, the number of collisions on those keys increases as well, eventually exceeding the capacity of the single server used for a given key. This is not a workload that is realistic or amenable to horizontal scaling. YCSB does provide alternate key distribution algorithms, so this is not a shortcoming of YCSB in general. We decided that a better model would be for the key collisions to be limited to a given YCSB client process. That way, as additional YCSB client processes (i.e. additional load) are added, they each maintain the same number of collisions they encounter themselves, but do not increase the number of collisions on a single key in the entire store. We added client processes proportionally to the number of records in the database (and therefore the number of shards).

    This change to the use of YCSB better models a use case where new groups of users are likely to access either just their own entries, or entries within their own subgroups, rather than all users showing the same interest in a single global collection of keys. If an application finds every user having the same likelihood of wanting to modify a single global key, that application has no real hope of getting horizontal scaling. Finally, we used read/modify/write (also known as "Compare And Set") style updates during the mixed phase. This uses versioned operations to make sure that no updates are lost. This mode of operation provides better application behavior than the way we have typically run YCSB in the past, and is only practical at scale because we eliminated the shared key collision hotspots. It is also a more realistic testing scenario. To reiterate, all updates used a simple majority replica ack policy, making them durable.

    Scalability Results

    In the table below, the "KVS Size" column is the number of records with the number of shards and the replication factor. Hence, the first row indicates 400m total records in the NoSQL Database (KV Store), 2 shards, and a replication factor of 3. The "Clients" column indicates the number of YCSB client processes. "Threads" is the number of threads per process, with the total number of threads in parentheses - hence, 90 threads per YCSB process for a total of 360 threads. The client processes were distributed across 10 client machines.

    Shards | KVS Size (records) | Clients | Threads   | Overall Throughput (ops/sec) | Read Latency av/95%/99% (ms) | Write Latency av/95%/99% (ms)
    2      | 400m (2x3)         | 4       | 90 (360)  | 302,152                      | 0.76/1/3                     | 3.08/8/35
    4      | 800m (4x3)         | 8       | 90 (720)  | 558,569                      | 0.79/1/4                     | 3.82/16/45
    8      | 1600m (8x3)        | 16      | 90 (1440) | 1,028,868                    | 0.85/2/5                     | 4.29/21/51
    10     | 2000m (10x3)       | 20      | 90 (1800) | 1,244,550                    | 0.88/2/6                     | 4.47/23/53
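    The fixed-width base-32 user ids mentioned above could look something like this sketch (C++ for illustration; the "user" prefix and the 9-digit width are assumptions, not YCSB's actual format):

        #include <cstdint>
        #include <string>

        // Encode a user id as a fixed-width base-32 string so the key
        // length stays constant even past 2^31 records (the int -> long
        // change). 9 base-32 digits cover 32^9 = 2^45 distinct ids.
        std::string base32_key(uint64_t id)
        {
            static const char digits[] = "0123456789abcdefghijklmnopqrstuv";
            std::string s(9, '0');
            for (int i = 8; i >= 0; --i) {
                s[i] = digits[id & 31];
                id >>= 5;
            }
            return "user" + s;  // 4 + 9 = 13-byte key
        }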

    Read the article

  • Rails + Nginx + Unicorn multiple apps

    - by Mikhail Nikalyukin
I inherited a server with two apps currently installed on it, and I need to add another one. Here are my configs.

    nginx.conf:

        user www-data www-data;
        worker_processes 4;
        pid /var/run/nginx.pid;

        events {
          worker_connections 768;
          # multi_accept on;
        }

        http {
          sendfile on;
          tcp_nopush on;
          tcp_nodelay on;
          keepalive_timeout 65;
          types_hash_max_size 2048;
          include /etc/nginx/mime.types;
          default_type application/octet-stream;

          ##
          # Logging Settings
          ##
          access_log /var/log/nginx/access.log;
          error_log /var/log/nginx/error.log;

          ##
          # Disable unknown domains
          ##
          server {
            listen 80 default;
            server_name _;
            return 444;
          }

          ##
          # Virtual Host Configs
          ##
          include /home/ruby/apps/*/shared/config/nginx.conf;
        }

    unicorn.rb:

        deploy_to   = "/home/ruby/apps/staging.domain.com"
        rails_root  = "#{deploy_to}/current"
        pid_file    = "#{deploy_to}/shared/pids/unicorn.pid"
        socket_file = "#{deploy_to}/shared/sockets/.sock"
        log_file    = "#{rails_root}/log/unicorn.log"
        err_log     = "#{rails_root}/log/unicorn_error.log"
        old_pid     = pid_file + '.oldbin'

        timeout 30
        worker_processes 10
        listen socket_file, :backlog => 1024
        pid pid_file
        stderr_path err_log
        stdout_path log_file
        preload_app true
        GC.copy_on_write_friendly = true if GC.respond_to?(:copy_on_write_friendly=)

        before_exec do |server|
          ENV["BUNDLE_GEMFILE"] = "#{rails_root}/Gemfile"
        end

        before_fork do |server, worker|
          defined?(ActiveRecord::Base) and ActiveRecord::Base.connection.disconnect!
          if File.exists?(old_pid) && server.pid != old_pid
            begin
              Process.kill("QUIT", File.read(old_pid).to_i)
            rescue Errno::ENOENT, Errno::ESRCH
            end
          end
        end

        after_fork do |server, worker|
          defined?(ActiveRecord::Base) and ActiveRecord::Base.establish_connection
        end

    I also added Capistrano to the project. deploy.rb:

        # encoding: utf-8
        require 'capistrano/ext/multistage'
        require 'rvm/capistrano'
        require 'bundler/capistrano'

        set :stages, %w(staging production)
        set :default_stage, "staging"

        default_run_options[:pty] = true
        ssh_options[:paranoid] = false
        ssh_options[:forward_agent] = true

        set :scm, "git"
        set :user, "ruby"
        set :runner, "ruby"
        set :use_sudo, false
        set :deploy_via, :remote_cache
        set :rvm_ruby_string, '1.9.2'

        # Create uploads directory and link
        task :configure, :roles => :app do
          run "cp #{shared_path}/config/database.yml #{release_path}/config/database.yml"
          # run "ln -s #{shared_path}/db/sphinx #{release_path}/db/sphinx"
          # run "ln -s #{shared_path}/config/unicorn.rb #{release_path}/config/unicorn.rb"
        end

        namespace :deploy do
          task :restart do
            run "if [ -f #{unicorn_pid} ] && [ -e /proc/$(cat #{unicorn_pid}) ]; then kill -s USR2 `cat #{unicorn_pid}`; else cd #{deploy_to}/current && bundle exec unicorn_rails -c #{unicorn_conf} -E #{rails_env} -D; fi"
          end
          task :start do
            run "cd #{deploy_to}/current && bundle exec unicorn_rails -c #{unicorn_conf} -E #{rails_env} -D"
          end
          task :stop do
            run "if [ -f #{unicorn_pid} ] && [ -e /proc/$(cat #{unicorn_pid}) ]; then kill -QUIT `cat #{unicorn_pid}`; fi"
          end
        end

        before 'deploy:finalize_update', 'configure'
        after "deploy:update", "deploy:migrate", "deploy:cleanup"
        require './config/boot'

    nginx.conf in the app's shared path:

        upstream staging_whotracker {
          server unix:/home/ruby/apps/staging.whotracker.com/shared/sockets/.sock;
        }

        server {
          listen 209.105.242.45;
          server_name beta.whotracker.com;
          rewrite ^/(.*) http://www.beta.whotracker.com/$1 permanent;
        }

        server {
          listen 209.105.242.45;
          server_name www.beta.hotracker.com;
          root /home/ruby/apps/staging.whotracker.com/current/public;

          location ~ ^/sitemaps/ {
            root /home/ruby/apps/staging.whotracker.com/current/system;
            if (!-f $request_filename) { break; }
            if (-f $request_filename) { expires -1; break; }
          }

          # cache static files :P
          location ~ ^/(images|javascripts|stylesheets)/ {
            root /home/ruby/apps/staging.whotracker.com/current/public;
            if ($query_string ~* "^[0-9a-zA-Z]{40}$") { expires max; break; }
            if (!-f $request_filename) { break; }
          }

          if ( -f /home/ruby/apps/staging.whotracker.com/shared/offline ) { return 503; }

          location /blog {
            index index.php index.html index.htm;
            try_files $uri $uri/ /blog/index.php?q=$uri;
          }

          location ~ \.php$ {
            try_files $uri =404;
            include /etc/nginx/fastcgi_params;
            fastcgi_pass unix:/var/run/php-fastcgi/php-fastcgi.socket;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
          }

          location / {
            proxy_set_header HTTP_REFERER $http_referer;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_max_temp_file_size 0;
            # If the file exists as a static file serve it directly without
            # running all the other rewrite tests on it
            if (-f $request_filename) { break; }
            if (!-f $request_filename) {
              proxy_pass http://staging_whotracker;
              break;
            }
          }

          error_page 502 =503 @maintenance;
          error_page 500 504 /500.html;
          error_page 503 @maintenance;
          location @maintenance {
            rewrite ^(.*)$ /503.html break;
          }
        }

    unicorn.log:

        executing ["/home/ruby/apps/staging.whotracker.com/shared/bundle/ruby/1.9.1/bin/unicorn_rails", "-c", "/home/ruby/apps/staging.whotracker.com/current/config/unicorn.rb", "-E", "staging", "-D", {5=>#<Kgio::UNIXServer:/home/ruby/apps/staging.whotracker.com/shared/sockets/.sock>}] (in /home/ruby/apps/staging.whotracker.com/releases/20120517114413)
        I, [2012-05-17T06:43:48.111717 #14636] INFO -- : inherited addr=/home/ruby/apps/staging.whotracker.com/shared/sockets/.sock fd=5
        I, [2012-05-17T06:43:48.111938 #14636] INFO -- : Refreshing Gem list
        worker=0 ready
        ...
        master process ready
        ...
        reaped #<Process::Status: pid 2590 exit 0> worker=6
        ...
        master complete

    The deploy goes through successfully, but when I try to access beta.whotracker.com or the IP address I get a SERVER NOT FOUND error, while the other apps work great. Nothing shows up in the error logs. Can you please point out where my fault is?

    Read the article

  • Is there a Windows equivalent for eventfd?

    - by Leaurus
I am writing a cross-platform library which emulates socket behaviour, with additional functionality in between (App-mylib-sockets). I want it to be as transparent as possible for the programmer, so primitives like select and poll must work accordingly with this lib. The problem is that when data becomes available (for instance) on the real socket, it will have to go through a lot of processing, so if select points to the real socket fd, the app will be blocked for a long time. I want the select/poll to unblock only when data is ready to be consumed (after my lib has done all the processing). So I came across eventfd, which allows me to do exactly what I want, i.e. to manipulate select/poll behaviour on a given fd. Since I am much more familiar with the Linux environment, I don't know what the Windows equivalent of eventfd is. I tried to search but got no luck. Can anyone help me, please?
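    There is no exact eventfd counterpart in Win32. One common workaround that stays select()-compatible is a loopback socket pair used as a self-pipe: writing a byte to one end makes the other end readable. A rough sketch (WSAStartup assumed to have run; error handling omitted for brevity):

        #include <winsock2.h>

        // Build a connected loopback TCP pair. send() one byte on
        // *signal_fd to make *wait_fd readable; *wait_fd can then sit in
        // select() alongside the library's other sockets.
        static int make_selftrigger(SOCKET* wait_fd, SOCKET* signal_fd)
        {
            SOCKET listener = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
            sockaddr_in addr = {};
            addr.sin_family = AF_INET;
            addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
            addr.sin_port = 0;                        // OS picks a free port
            int len = sizeof(addr);
            bind(listener, (sockaddr*)&addr, len);
            getsockname(listener, (sockaddr*)&addr, &len);
            listen(listener, 1);
            *signal_fd = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
            connect(*signal_fd, (sockaddr*)&addr, len);
            *wait_fd = accept(listener, NULL, NULL);
            closesocket(listener);
            return 0;
        }
        // signal:  send(sig, "x", 1, 0);
        // clear:   recv(wait, buf, 1, 0);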

    Read the article

  • Can't Deploy or Upload Large SSRS 2008 Report from VS or IE

    - by Bratch
So far in this project I have two reports in VS2008/BIDS. The first one contains 1 tablix and is about 100k. The second one contains 3 tablixes (tablices?) and is about 257k. I can successfully deploy the smaller report from VS and I can upload it from the Report Manager in IE. I can view/run it from Report Manager and I can get to the Report Server (web service) URL from my browser just fine. Everything is done over HTTPS and there is nothing wrong with the certificates. With the larger report, the error I get in VS is "The operation has timed out" after about 100 seconds. The error when I upload from IE is "The underlying connection was closed: An unexpected error occurred on a send" after about 130 seconds. In the RSReportServer.config file I tried changing Authentication/EnableAuthPersistence from true to false and restarting the service, but still get the error. I have the key "SecureConnectionLevel" set to 2. Changing this to 0 and turning off SSL is not going to be an option. I added a registry key named "MaxRequestBytes" to HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\HTTP\Parameters and set it to 5242880 (5MB) and restarted the HTTP and SRS services as suggested in a forum post by Jin Chen of MSFT. I still cannot upload the larger report. This is on MS SQL 2008 and WS 2003.

    Below is part of a log file entry from ...\Reporting Services\LogFiles when I attempted to upload from IE:

        library!WindowsService_0!89c!02/10/2010-07:57:57:: i INFO: Call to CleanBatch() ends
        ui!ReportManager_0-1!438!02/10/2010-07:59:33:: e ERROR: The underlying connection was closed: An unexpected error occurred on a send.
        ui!ReportManager_0-1!438!02/10/2010-07:59:34:: e ERROR: HTTP status code -- 500
        -------Details--------
        System.Net.WebException: The underlying connection was closed: An unexpected error occurred on a send. --- System.IO.IOException: Unable to write data to the transport connection: An established connection was aborted by the software in your host machine. --- System.Net.Sockets.SocketException: An established connection was aborted by the software in your host machine
           at System.Net.Sockets.Socket.MultipleSend(BufferOffsetSize[] buffers, SocketFlags socketFlags)
           at System.Net.Sockets.NetworkStream.MultipleWrite(BufferOffsetSize[] buffers)
           --- End of inner exception stack trace ---
        ...

    Read the article

  • Max TCP Connections to a machine

    - by A9S6
I am creating a Windows Service in .NET to which N number of clients can connect. The service starts a TCP listener and accepts the client connections. The problem I am facing is that I can only open 10 connections to this service. The listener's AcceptTcpClient() method accepts only 10 connections and throws an exception for the 11th one. The client application uses the System.Net.Sockets.TcpClient class and the service is using the System.Net.Sockets.TcpListener class. This is the exception that I am getting when I try to make a number of connections in a for loop to this service (after the 10th connection is made): "Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host"
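    A hard ceiling of exactly 10 often points at the listen backlog (the queue of completed connections waiting to be accepted) rather than a true connection limit - a guess, not a diagnosis; in .NET, TcpListener.Start has an overload that takes the backlog explicitly. In BSD-socket terms, the knob looks like this sketch:

        #include <netinet/in.h>
        #include <string.h>
        #include <sys/socket.h>

        // The backlog argument to listen() bounds how many completed
        // connections may queue up before accept() drains them; a small
        // value makes further connect attempts fail while the server is busy.
        int make_listener(unsigned short port)
        {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            struct sockaddr_in addr;
            memset(&addr, 0, sizeof(addr));
            addr.sin_family = AF_INET;
            addr.sin_addr.s_addr = htonl(INADDR_ANY);
            addr.sin_port = htons(port);
            bind(fd, (struct sockaddr*)&addr, sizeof(addr));
            listen(fd, SOMAXCONN);   // ask for the system maximum, not 10
            return fd;
        }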

    Read the article

  • C++ wrapper for posix and linux specific functions

    - by Libzajc
Hi. Do you know of any good library wrapping POSIX and Linux-specific functions and structures (e.g. sockets or file descriptors) into C++ classes? For example, I'm thinking about a base FileDescriptor class and some inheriting classes (unix sockets etc.) with methods like write and read, or even some syscalls (sendfile, splice) - all throwing exceptions instead of setting errno. Or some shared memory class, etc. I can't seem to find anything like that, and by now I'm considering writing it myself, as I often have to write a C++ app for Linux and either use C functions (painful error checking) or wrap them myself every time.
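    A minimal sketch of the base-class idea described above, assuming C++11 (the names are illustrative, not from an existing library):

        #include <unistd.h>
        #include <cerrno>
        #include <cstddef>
        #include <system_error>

        // RAII ownership of a file descriptor; errors surface as
        // std::system_error instead of errno checks at every call site.
        class FileDescriptor {
        public:
            explicit FileDescriptor(int fd) : fd_(fd) {}
            ~FileDescriptor() { if (fd_ >= 0) ::close(fd_); }
            FileDescriptor(const FileDescriptor&) = delete;
            FileDescriptor& operator=(const FileDescriptor&) = delete;

            std::size_t write(const void* buf, std::size_t n) {
                ssize_t r = ::write(fd_, buf, n);
                if (r < 0) throw std::system_error(errno, std::generic_category(), "write");
                return static_cast<std::size_t>(r);
            }
            std::size_t read(void* buf, std::size_t n) {
                ssize_t r = ::read(fd_, buf, n);
                if (r < 0) throw std::system_error(errno, std::generic_category(), "read");
                return static_cast<std::size_t>(r);
            }

        protected:
            int fd_;   // derived classes (sockets, pipes, ...) reuse this
        };

    The destructor-based cleanup is what makes throwing safe here: the descriptor is released even when an exception unwinds the stack.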

    Read the article

  • Is it possible to use IOCP (or another API) in Reactor-style operations?

    - by Artyom
Hello, is there any scalable Win32 API (like IOCP, not like select) that gives you reactor-style operations on sockets? AFAIK, IOCP gives you notification of completed operations like data read or written (proactor), but I'm looking for reactor-style operations: I need to get a notification when the socket is readable or writable (reactor). Something similar to epoll, kqueue, /dev/poll? Is there such an API in Win32? If so, where can I find a manual on it? Clarification: I need a select-like API for sockets that is as scalable as IOCP, or I'm looking for a way to use IOCP in reactor-like operations.
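    One commonly cited trick to coax reactor-style readiness out of IOCP is the zero-byte read: post an overlapped WSARecv with a zero-length buffer, and its completion signals "readable" without consuming any data. A sketch, assuming the socket is already associated with a completion port and 'ov' is zeroed by the caller:

        #include <winsock2.h>

        // Arm a readiness notification: the zero-byte WSARecv completes on
        // the IOCP when data arrives, without reading any of it; the caller
        // then does an ordinary non-blocking recv() itself (reactor style).
        static int arm_readable(SOCKET s, WSAOVERLAPPED* ov)
        {
            WSABUF buf = { 0, NULL };   // zero-length buffer
            DWORD flags = 0;
            int rc = WSARecv(s, &buf, 1, NULL, &flags, ov, NULL);
            if (rc == SOCKET_ERROR && WSAGetLastError() != WSA_IO_PENDING)
                return SOCKET_ERROR;    // real failure
            return 0;                   // completion will arrive on the IOCP
        }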

    Read the article

  • Polling duplex does not scale... what's the alternative?

    - by user80855
Our tests showed that the polling duplex binding simply does not scale and cannot be used on a service within a web farm or even a web garden. We have looked at TCP/IP sockets for a client push method, but firewall issues do not allow us to use sockets. I was wondering what the alternative "free" solution to this problem is - one that allows us to scale and to push data to the client. I have also tried the solution in this article http://tomasz.janczuk.org/2009/09/scale-out-of-silverlight-http-polling.html but in the end there was too much polling on the database, and performance was affected. Our Silverlight application needs a pub/sub design, but it needs to be reliable and scalable... any ideas?

    Read the article

  • IPC: Communication between Qt4 and MONO processes (on linux)

    - by elcuco
I have to connect a Qt4 application to a Mono application. The current proof of concept uses network sockets (which is nice - I can debug using nc on the command line), but I am open to new suggestions. What are my alternatives? Edit: The original application stack is split into two parts: server + client. The client is supposed to show pictures and videos. Since we found that this is not possible in a sane way in Mono, we split the client into two parts: server - client - GUI. In the original implementation the client and GUI were the same application. Now the client is in C# (running on Mono), and the GUI is Qt4. Rewriting the client in Qt4 is not an option. Right now the communication between the client and the GUI is done using TCP sockets through localhost. I am looking for better implementations.
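    If localhost TCP works but feels heavyweight, a unix domain socket is the closest drop-in: Qt wraps it as QLocalSocket, and Mono can listen on the same path via Mono.Unix's UnixEndPoint. A sketch of the Qt side (the socket name is illustrative, not prescribed):

        #include <QtCore/QCoreApplication>
        #include <QtNetwork/QLocalSocket>

        int main(int argc, char** argv)
        {
            QCoreApplication app(argc, argv);
            QLocalSocket sock;
            sock.connectToServer("/tmp/app.sock");  // assumed server-side name
            if (!sock.waitForConnected(1000))
                return 1;                           // Mono side not listening
            sock.write("hello\n");
            sock.waitForBytesWritten(1000);
            return 0;
        }

    You keep the nc-style debuggability on Linux (socat can talk to unix sockets) while dropping the port management of TCP.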

    Read the article

  • Multi-Threading - Cleanup strategy at program end

    - by weismat
What is the best way to finish a multi-threaded application in a clean way? I am starting several socket connections from the main thread in separate threads, waiting until the end of my business day in the main thread, and currently using System.Environment.Exit(0) to terminate the application. This leads to an unhandled exception in one of the children. Should I stop the threads from the list? I have been reluctant to implement any real stopping in the children yet, thus I am wondering about the best practice. The sockets are all wrapped nicely with proper destructors for logging out and closing, but it still leads to errors.
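    The usual pattern is cooperative shutdown: set a flag the workers poll, then join them all before main exits, so the socket wrappers' teardown code actually runs. The question is about .NET, but the shape is the same in any language; a C++ sketch:

        #include <atomic>
        #include <thread>
        #include <vector>

        std::atomic<bool> stop_requested{false};

        void worker()
        {
            while (!stop_requested.load()) {
                // poll/receive on the socket with a timeout so the
                // flag is noticed promptly
            }
            // log out and close the socket cleanly here
        }

        int main()
        {
            std::vector<std::thread> workers;
            for (int i = 0; i < 4; ++i)
                workers.emplace_back(worker);
            // ... wait until end of business day ...
            stop_requested = true;             // ask the children to finish
            for (auto& t : workers)
                t.join();                      // clean teardown, no Exit(0)
            return 0;
        }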

    Read the article

  • Socket Select with empty fd set

    - by whoi
Hi. Suppose you have an fd set which can have zero or more sockets in it. When I try to call the select operation on an empty fd set, what I get is -1 as the number of fds which are set, meaning an error. So what would you suggest to overcome this problem? You might say "do not call it if the set is empty", but I have a loop, and at any time the fd set can hold 0 or more sockets. What is the best approach to this problem? (We are using the C programming language.)
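    One way to keep a single loop when the set may be empty is to guard the call and sleep out the same timeout instead, so the loop cadence is unchanged either way. A sketch (POSIX assumed; Windows' select() rejects empty sets with WSAEINVAL, which matches the -1 described above):

        #include <sys/select.h>
        #include <unistd.h>

        /* Returns the number of ready fds, or 0 when there was nothing to
         * watch. 'count' is how many sockets the caller put into 'set'. */
        int wait_readable(fd_set *set, int maxfd, int count, struct timeval *tv)
        {
            if (count == 0) {
                if (tv)  /* nothing to watch: just sleep out the timeout */
                    usleep((useconds_t)(tv->tv_sec * 1000000 + tv->tv_usec));
                return 0;
            }
            return select(maxfd + 1, set, NULL, NULL, tv);
        }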

    Read the article

  • GAE and Socket Data

    - by Vinod
I have a field device which keeps sending data to any designated port using sockets. I am planning to use GAE for the server-side infrastructure. I read that GAE does not support sockets, but I can configure the device to send the data over port 80, so we wrote a generic servlet to capture this data on GAE. However, it is not obtaining any values from the client. Any suggestions to fix this issue?

    Read the article

  • Do Any Client-Side JavaScript Frameworks Integrate Well With Node.js+Express.js+socket.io.js?

    - by Tom D
I'm building a webapp using Node.js+Express.js+socket.io.js on the backend. Do any of the popular front-end frameworks (Agility, Angular, Backbone, Closure, Dojo, Ember, GWT, jQuery, Knockback, Knockout, Spine, YUI, etc.) integrate well with this backend for "real-time" applications? I want my application to have a very "real-time" feel. Specifically, when a user submits a form I want the information to be sent using web sockets to the backend for validation and (if validation passes) to be updated in the database. Then, the server side will use web sockets to send a confirmation that the data was saved, or some list of errors. I will use the server's response to update the page using JavaScript. I know all this can be done with any of the listed frameworks. I'm interested in features of particular frameworks that will help the framework integrate better with the Node-based backend than the other frameworks.

    Read the article

  • Can a client determine whether the server has accept()'d a unix socket?

    - by Havoc P
    I'm dealing with a buggy server that will sometimes fail to accept() connections (but leaves its listening socket open). This is on Linux with unix domain sockets. Currently the only way to detect this is that after sending a bunch of data, the buffer fills up and blocks, and the server isn't sending any replies. This long-after-the-fact failure mode is hard to distinguish from other bugs - the server could be unresponsive for other reasons. Especially for unix domain sockets it seems the kernel should know whether accept() has occurred; is there any way to find this out? Can the client block until accept() happens somehow, or at least check whether it has? This is just for debugging purposes so it can be a little ugly.
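    There is no portable "has accept() run?" query, but since this is for debugging, a one-byte greeting works: connect() on a unix socket succeeds as soon as the kernel backlog completes the handshake, whereas a byte written by the server immediately after accept() can only ever arrive post-accept. A sketch of the client side (assumes you can add the greeting to the server):

        #include <poll.h>
        #include <unistd.h>

        // Wait up to timeout_ms for the server's post-accept greeting byte.
        // Returns 0 if it arrived (accept() definitely happened), -1 if not.
        int wait_for_accept_greeting(int fd, int timeout_ms)
        {
            struct pollfd p = { fd, POLLIN, 0 };
            if (poll(&p, 1, timeout_ms) <= 0)
                return -1;              // no greeting: likely never accepted
            char c;
            return read(fd, &c, 1) == 1 ? 0 : -1;
        }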

    Read the article
