Search Results

Search found 5662 results on 227 pages for 'processor socket'.

  • Setting processor affinity on CSC.exe launched by CoreCompile MSBuild Task

    - by Hardy
    I am wondering if there is a simple way to ensure that when a C# project is compiled, the CSC.exe it launches inherits the parent's processor affinity settings, or perhaps a way by which I can supply this. I have been trying to accomplish this by launching a bat file from a VS.NET cmd prompt like

        start /affinity 01 custombuild.cmd

    and inside my custombuild.cmd I have

        @echo off
        msbuild Libraries.sln /t:rebuild /p:Configuration=Release;platform=x64 /m:1
        :END

    The command-line call to Csc.exe this generates looks like the following (ignoring the rest for brevity):

        C:\Windows\Microsoft.NET\Framework\v4.0.30319\Csc.exe ...

    What I'd like is for CSC.exe to inherit the processor affinity, or a simple way to override how the csc.exe call is generated so I can turn it into

        start /affinity 01 C:\Windows\Microsoft.NET\Framework\v4.0.30319\Csc.exe ...

    I also noticed that the CoreCompile target is defined in Microsoft.CSharp.targets. Should I be considering overriding the MSBuildToolsPath variable so I can sneak in my own version? This feels rather hacky. Any help would be much appreciated.
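    A minimal Python sketch of the same idea (an assumption, not from the question: it needs the third-party psutil package, and it relies on the Windows rule that child processes inherit the parent's affinity mask at creation time, so the Csc.exe that MSBuild spawns should start with the same mask):

        import os
        import subprocess

        import psutil  # third-party: pip install psutil

        # Restrict this process to CPU 0 before spawning the build;
        # on Windows, child processes inherit the parent's affinity mask.
        psutil.Process(os.getpid()).cpu_affinity([0])

        # MSBuild, and the Csc.exe it launches, should now be confined to CPU 0.
        subprocess.run(
            ["msbuild", "Libraries.sln", "/t:rebuild",
             "/p:Configuration=Release;platform=x64", "/m:1"],
            check=True,
        )

    This is equivalent to the start /affinity 01 approach, with the advantage that psutil can also read the affinity of the running Csc.exe back to verify the inheritance actually happened.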

  • Receiving Multicast Messages on a Multihomed Windows PC

    - by Basti
    I'm developing a diagnostic tool, based on multicast/UDP, on a PC with several network interfaces. The user can select a NIC; the application creates sockets, binds them to this NIC and adds them to the specific multicast group. Sending multicast messages works fine. However, receiving messages only succeeds if I bind the sockets to one specific NIC of my PC. It almost looks as if there is a 'default' NIC for receiving multicast messages in Windows, which is always the first NIC returned by the GetAdaptersInfo function. I monitored the network with Wireshark and discovered that the "IGMP Join Group" message isn't sent from the NIC I bound the socket to, but by this 'default' NIC. If I disable this NIC (or remove the network cable), the next NIC in the list returned by GetAdaptersInfo is used for receiving multicast messages. I managed to change this 'default' NIC by adding an additional entry to the routing table of my PC, but I don't think this is a good solution to the problem. The problem also occurs with the code appended below: the join group message isn't sent via 192.168.1.52 but via a different NIC.

        // socket_tst.cpp : Defines the entry point for the console application.
        #include <tchar.h>
        #include <winsock2.h>
        #include <ws2ipdef.h>
        #include <IpHlpApi.h>
        #include <IpTypes.h>
        #include <stdio.h>

        int _tmain(int argc, _TCHAR* argv[])
        {
            WSADATA m_wsaData;
            SOCKET m_socket;
            sockaddr_in m_sockAdr;
            UINT16 m_port = 319;
            u_long m_interfaceAdr = inet_addr("192.168.1.52");
            u_long m_multicastAdr = inet_addr("224.0.0.107");

            int returnValue = WSAStartup(MAKEWORD(2,2), &m_wsaData);
            if (returnValue != S_OK) {
                return returnValue;
            }

            // Create socket
            if (INVALID_SOCKET == (m_socket = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP))) {
                return WSAGetLastError();
            }

            int doreuseaddress = TRUE;
            if (setsockopt(m_socket, SOL_SOCKET, SO_REUSEADDR,
                           (char*)&doreuseaddress, sizeof(doreuseaddress)) == SOCKET_ERROR) {
                return WSAGetLastError();
            }

            // Configure socket address
            memset(&m_sockAdr, 0, sizeof(m_sockAdr));
            m_sockAdr.sin_family = AF_INET;
            m_sockAdr.sin_port = htons(m_port);
            m_sockAdr.sin_addr.s_addr = m_interfaceAdr;

            // Bind socket
            if (bind(m_socket, (SOCKADDR*)&m_sockAdr, sizeof(m_sockAdr)) == SOCKET_ERROR) {
                return WSAGetLastError();
            }

            // Join multicast group
            struct ip_mreq_source imr;
            memset(&imr, 0, sizeof(imr));
            imr.imr_multiaddr.s_addr  = m_multicastAdr;   // address of multicast group
            imr.imr_sourceaddr.s_addr = 0;                // source address (not used)
            imr.imr_interface.s_addr  = m_interfaceAdr;   // interface address

            /* first join the multicast group, then register the selected
             * interface as the multicast sending interface */
            if (setsockopt(m_socket, IPPROTO_IP, IP_ADD_MEMBERSHIP,
                           (char*)&imr, sizeof(imr)) == SOCKET_ERROR) {
                return SOCKET_ERROR;
            }
            if (setsockopt(m_socket, IPPROTO_IP, IP_MULTICAST_IF,
                           (char*)&imr.imr_interface.s_addr,
                           sizeof(imr.imr_interface.s_addr)) == SOCKET_ERROR) {
                return SOCKET_ERROR;
            }

            printf("receiving msgs...\n");
            while (1) {
                // Read from the socket
                sockaddr_in socketAddress;
                char buffer[1500];
                int addressLength = sizeof(socketAddress);
                int sock_return = recvfrom(m_socket, buffer, sizeof(buffer), 0,
                                           (SOCKADDR*)&socketAddress, &addressLength);
                if (sock_return == SOCKET_ERROR) {
                    return WSAGetLastError();
                }
                printf("got message!\n");
            }
            return 0;
        }

    Thanks for your help!
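    For quick experimentation, the same join can be expressed with Python's standard socket module (a sketch using the addresses from the code above). The ip_mreq passed to IP_ADD_MEMBERSHIP carries the interface address explicitly, and that field, not the bind address, is what should determine where the IGMP join is issued, which makes this easy to test against Wireshark:

        import socket
        import struct

        IFACE = "192.168.1.52"   # NIC to receive on
        GROUP = "224.0.0.107"    # multicast group
        PORT = 319

        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind((IFACE, PORT))

        # ip_mreq: group address followed by the interface address.
        mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton(IFACE))
        s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

        while True:
            data, addr = s.recvfrom(1500)
            print("got message from", addr)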

  • SQL Server 2005: Internal Query Processor Error

    - by Geetha
    I am trying to execute the following procedure in SQL Server 2005. I was able to execute it on my development server, but when I try to use it on the live server I get the error "Internal Query Processor Error: The query processor could not produce a query plan. For more information, contact Customer Support Services." I am using the same database and the same format. A web search turns up some fixes for SQL Server 2005 to avoid this error, but my DBA has confirmed that all the patches are up to date on our server. Can anyone give me some clue on this? Query:

        create procedure [dbo].[sample_Select]
            @ID as varchar(40)
        as
        declare @Execstring as varchar(1000)

        set @Execstring = '
            declare @MID as varchar(40)
            set @MID = ''' + @ID + '''
            select * from
            (
                select t1.field1, t1.field2 AS field2, t1.field3 AS field3,
                       L.field1 AS field1, L.field2 AS field2
                from table1 AS t1
                inner join MasterTable AS L ON L.field1 = t1.field2
                where t1.field2 LIKE @MID
            ) as DataTable
            PIVOT
            (
                Count(field2) FOR field3 IN ('

        select @Execstring = @Execstring + L.field2 + ','
        from MasterTable AS L
        inner join table1 AS t1 ON t1.field1 = L.field2
        where t1.field2 LIKE @ID

        set @Execstring = stuff(@Execstring, len(@Execstring), 1, '')
        set @Execstring = @Execstring + ')) as pivotTable'

        exec (@Execstring)

  • Socket (TCP/IP) Unstable

    - by Lee Kwan Wee
    I have a SCPI server set up on a Windows 7 PC, with two other programs talking to it locally (127.0.0.1) over TCP/IP sockets on ports 5025 and 5029. This worked well and was stable on a fresh PC, but when we moved it onto our production floor and the IT department added their policies and such, it became unstable. The PC is connected to the production floor server, but both of the programs are running locally on the PC. The connection tends to drop after an idle period, and it takes 5-6 attempts at refreshing the connection to get it back. I'm not a programmer myself, so I'm hoping someone here can help with some answers. Thank you very much!! Regards, KwanWee.
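    If the programs can be changed, one common mitigation for idle-period drops (often introduced by firewall or endpoint-security policies) is to enable TCP keepalives on the client sockets so the connection never looks idle. A minimal Python sketch, assuming a client of the SCPI port on 5025:

        import socket

        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.connect(("127.0.0.1", 5025))

        # Send keepalive probes so middleboxes never see an idle connection.
        s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

        # Windows only: first probe after 30 s of idle time, then every 5 s.
        s.ioctl(socket.SIO_KEEPALIVE_VALS, (1, 30000, 5000))

    Whether this helps depends on what is actually tearing the connections down, which is worth confirming with a packet capture if possible.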

  • i5 vs. i7 processor dev laptop

    - by vector
    Greetings! I need to get a laptop for dev work (mostly server-side Java, NetBeans) and wonder if anyone has had a chance to use either an i5- or an i7-based laptop? Is the i7 overkill, or will the i5 handle it just fine? I'm thinking of something from the HP line running Ubuntu. Thanks

  • How does ARM Cortex A8 compare with a modern x86 processor

    - by thomasrutter
    I was wondering how a modern ARM chip based on the Cortex A8 compares, in clock-for-clock performance and capability, to a modern x86 chip such as a Core 2 Duo or Core i5. I realise that due to the different instruction sets it'll depend heavily on what you're doing. To put it another way: rendering a web page in WebKit on a 1GHz ARM Cortex A8 based chip should be about equivalent to doing it on a Core i5 at __ MHz? Update, October 2013: Since I asked this question years ago, it's become a lot more common, when reading about mobile devices, to see architecture-agnostic benchmarks that you can compare across platforms. For example, in-browser benchmarks like SunSpider in WebKit will run on just about anything, and you see these in reviews all the time now. And there are things like Geekbench now.

  • Raw socket sendto() failure in OS X

    - by user37278
    When I open a raw socket in OS X, construct my own UDP packet (headers and data), and call sendto(), I get the error "Invalid argument". Here is a sample program, "rawudp.c", from the web site http://www.tenouk.com/Module43a.html that demonstrates this problem. The program (after adding string and stdlib #includes) runs under Fedora 10 but fails with "Invalid argument" under OS X. Can anyone suggest why this fails on OS X? I have looked and looked and looked at the sendto() call, but all the parameters look good. I'm running the code as root, etc. Is there perhaps a kernel setting that prevents even uid 0 executables from sending packets through raw sockets in OS X Snow Leopard? Thanks.
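    One frequently cited cause of exactly this EINVAL on BSD-derived stacks (which OS X is): with IP_HDRINCL, the ip_len and ip_off fields of the IP header are expected in host byte order, whereas Linux expects network byte order. A hedged Python sketch of a header builder that accounts for the difference (the field values are illustrative, not from the question):

        import socket
        import struct
        import sys

        def build_ip_header(src, dst, payload_len):
            total_len = 20 + payload_len
            # BSD-derived stacks (macOS) want ip_len in HOST byte order;
            # Linux wants network byte order. EINVAL from sendto() is the
            # typical symptom of getting this wrong on macOS.
            order = "=H" if sys.platform == "darwin" else "!H"
            header = struct.pack("!BB", (4 << 4) | 5, 0)               # version/IHL, TOS
            header += struct.pack(order, total_len)                    # total length
            header += struct.pack("!HH", 54321, 0)                     # id, flags/frag offset
            header += struct.pack("!BBH", 64, socket.IPPROTO_UDP, 0)   # TTL, proto, checksum (0 = kernel typically fills it)
            header += socket.inet_aton(src) + socket.inet_aton(dst)
            return header

    If the C sample already converts ip_len with htons(), leaving it in host byte order when building for OS X is the corresponding fix.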

  • Rails: Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock'

    - by misbehavens
    So I've got a Ruby on Rails application that I am trying to run (in development) on Snow Leopard. I've got it working on my Ubuntu computer, but now I need to get my Snow Leopard environment set up. Originally, I installed version 2.8.1 of the MySQL Ruby gem and was running into this issue:

        uninitialized constant MysqlCompat::MysqlRes

    But thanks to this tutorial I was able to resolve it by running this command and installing a previous version of the gem:

        export ARCHFLAGS="-arch i386 -arch x86_64"; sudo gem install --no-rdoc --no-ri -v=2.7 mysql -- --with-mysql-dir=/usr/local/mysql --with-mysql-config=/usr/local/mysql/bin/mysql_config

    Now that I've resolved that issue, I'm running into a different error:

        Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock'

    This happens when I try to run rake db:migrate as well as when the server is running. How can I resolve this issue?
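    A detail worth checking (an assumption, since the question doesn't show config/database.yml): /var/run/mysqld/mysqld.sock is the Debian/Ubuntu default path, so it was probably carried over from the Ubuntu setup. A typical MySQL install on OS X puts the socket at /tmp/mysql.sock (verify with mysqladmin variables | grep socket), in which case the development entry would look something like:

        development:
          adapter: mysql
          database: myapp_development   # hypothetical name
          username: root
          password:
          socket: /tmp/mysql.sock

    Alternatively, replacing the socket line with host: 127.0.0.1 forces a TCP connection and sidesteps the socket path entirely.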

  • Autohotkey: clipboard enhancements don't work in Google Docs word processor

    - by Robert Mark Bram
    Came across this amazing AutoHotkey tip, "Clipboard enhancements", in an earlier question:

        ; Append to clipboard (cut)
        ^+x::
            clipboardBefore = %clipboard%
            Send ^x
            ClipWait, 2
            clipboard = %clipboardBefore% %clipboard%
        return

        ; Append to clipboard (copy)
        ^+c::
            clipboardBefore = %clipboard%
            Send ^c
            ClipWait, 2
            clipboard = %clipboardBefore% %clipboard%
        return

    Source: most useful autohotkey scripts. But the append copy and cut don't seem to work in Google Docs (word processing). Anyone know how they can be fixed? Rob :)

  • Several Server Errors (no database connection, can't create TCP/IP socket, etc.)

    - by Tobias Baumeister
    My server stopped taking requests on my website today. It works for some time, but then the server just stops responding and throws several errors:

        500 Internal Server Error

        Warning: mysql_connect(): Can't create TCP/IP socket (105) in [...] on line 7
        Couldn't connect to database. Please try again.

        mysql_connect(): Host [...] is blocked because of many connection errors;
        unblock with 'mysqladmin flush-hosts'

        mod_fcgid: can't apply process slot for /var/www/cgi-bin/cgi_wrapper/cgi_wrapper
        (this one is from the error log)

    Any ideas what might cause it? The operating system is Ubuntu.

  • SNMP - Value of CPU processor load not reflecting reality

    - by Ovesh
    Trying to plot CPU load on my server, a ProLiant DL360p Gen8 (same behavior on a ProLiant DL360 G7). The machine is running VMware ESXi 5.1. To create a CPU spike I run dd if=/dev/zero of=/dev/null, and I know the CPU is overloaded because I can see a correlating spike in the graphs displayed in vCenter. However, running this snmpwalk:

        snmpwalk -v 1 -c ******** 192.168.MY_IP 1.3.6.1.2.1.25.3.3.1.2

    shows the following results:

        iso.3.6.1.2.1.25.3.3.1.2.1 = INTEGER: 3
        iso.3.6.1.2.1.25.3.3.1.2.2 = INTEGER: 2
        iso.3.6.1.2.1.25.3.3.1.2.3 = INTEGER: 2
        iso.3.6.1.2.1.25.3.3.1.2.4 = INTEGER: 3

    Am I not looking at the right MIB? Should I be multiplying these by a constant? By the way, using HP Agentless Monitoring I was able to get some CPU stats, but not what I'm looking for, at least nothing I could find wading through those MIBs.

  • Processor always at max speed

    - by Pratyush Nalam
    I am running Windows 8 Pro on an Apple MacBook Pro 9,1 (mid-2012, 15-inch, non-Retina). It has a Core i7-3720QM CPU @ 2.6 GHz. For the past few days, I have noticed that it is constantly running at its maximum speed of 2.59 GHz. Before, it used to run at 1.5-1.8 GHz under normal usage. And the weirder thing is that CPU usage is minimal. So, what is the reason for this? And is it harmful?

  • BT socket Device over Structured Cabling

    - by TheD
    Not sure if this is the right Stack Exchange site to post on, but I believe cabling is a valid subject. Essentially I have a credit card machine which is connected to a phone line via a standard BT socket. So basically, the port on the wall has a balun plugged in which adapts the RJ45 outlet to BT, so I can plug the device in. This works fine; however, I need the machine to be on the other side of the room, so I want to route it through my patch panel (structured cabling). How can this be done?

        Device -- ? -- RJ45 port -- Patch panel -- ? -- Balun -- Wall outlet

    where the ?s are to be filled in! :)

  • Unix domain socket firewall

    - by lagab
    Hello, everyone. I've got a problem with my Debian server. There is probably a vulnerable script on my web server, running as the www-data user. I also have Samba with winbind installed, and Samba is joined to a Windows domain. So this vulnerable script probably allows an attacker to brute-force our domain controller through the winbind Unix domain socket. Indeed, I see lots of lines like this in the netstat -a output:

        unix  3  [ ]  STREAM  CONNECTED  509027  /var/run/samba/winbindd_privileged/pipe

    and our DC logs contain lots of recorded authentication attempts from the root and guest accounts. How can I restrict Apache's access to winbind? I had an idea to use some kind of firewall for IPC sockets. Is it possible?

  • 912 stream processors available in OpenCL

    - by tugrul büyükisik
    I am thinking of assembling this system:

    - AMD CPU (A8-3870 APU, which has a Radeon HD 6550D inside: 400 stream processors, xxx GFLOPS), nearly $110
    - AMD graphics card: HD 7750 (512 stream processors, 819 GFLOPS peak performance), nearly $170
    - Appropriate RAM (1600 MHz bus)
    - Mainboard

    What GFLOPS level can I reach as a stable mode using OpenCL and similar programs? Can I use all 912 stream processors at the same time? I am not trying to ask a versus question; I need to know what would be better for scientific computing (75% of the time) and gaming (25% of the time) because I have a low budget. By "scientific calculations" I mean fluid dynamics/solid-state physics simulations; by games I mean those that need OpenCL and PhysX.
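    On the "all 912 at once" question: OpenCL exposes the APU's integrated GPU and the discrete card as separate devices, so work can be queued to both concurrently, but the two pools of stream processors don't combine into one device. A quick Python sketch (assuming the third-party pyopencl package) to enumerate what a host actually exposes:

        import pyopencl as cl

        # List every OpenCL platform and device visible on this host;
        # the HD 6550D and the HD 7750 should show up as separate devices.
        for platform in cl.get_platforms():
            for device in platform.get_devices():
                print(platform.name, "|", device.name,
                      "| compute units:", device.max_compute_units,
                      "| clock:", device.max_clock_frequency, "MHz")

    Keeping both busy then means creating a context and command queue per device and partitioning the work yourself.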

  • Limit a process's relative (not absolute) processor consumption in Linux

    - by BobBanana
    What is the standard way in Linux to enforce a system policy that limits the relative CPU use of a single process? That is, on a quad-core machine, I never want a process to use more than 2 CPUs at once, even if the process creates more threads. I do not want an absolute time limit, just a relative limit so that one task cannot dominate the machine. This is also different from renice, which allows a process to use all the resources but politely step aside if others need them too. ulimit is the usual resource-limiting tool, but it does not allow such CPU restrictions: it can limit the number of processes per user, or absolute CPU time, but not restrict the maximum number of active threads of a single process. I've found a couple of user-level tools, like cpulimit, but not a system-level tool or setting. Does such a standard resource controller exist in Linux (Red Hat Enterprise, if it matters)? If there is such a limit imposed, how would a user identify it?
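    For a per-process (rather than system-policy) version of this, pinning to a subset of CPUs is the usual mechanism: taskset(1) from the shell, or the equivalent syscall from code. A minimal Python sketch (Linux only; os.sched_setaffinity wraps sched_setaffinity(2)):

        import os

        # Confine the current process to CPUs 0 and 1. All threads it creates
        # inherit the mask, so it can never occupy more than two cores, while
        # remaining free to use up to 100% of each of them.
        os.sched_setaffinity(0, {0, 1})

        # A user can inspect the mask of any process by PID:
        print(os.sched_getaffinity(os.getpid()))

    For a system-wide policy the cpuset cgroup controller is the usual answer, which also covers the "how would a user identify it" part: the effective mask shows up as Cpus_allowed in /proc/<pid>/status.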

  • Amazon EC2 Socket connection not being accepted

    - by Joseph
    I am trying to run a Java application on my EC2 instance. The application accepts socket connections on port 54321. If I try to connect to it, it times out. My security group is set as:

        TCP Port (Service)   Source      Action
        21                   0.0.0.0/0   Delete
        22 (SSH)             0.0.0.0/0   Delete
        80 (HTTP)            0.0.0.0/0   Delete
        20393                0.0.0.0/0   Delete
        54321                0.0.0.0/0   Delete

    Is there anything else I need to do?

        # iptables -nvL
        Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target  prot opt in  out  source  destination

        Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target  prot opt in  out  source  destination

        Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target  prot opt in  out  source  destination

        # iptables -nvL -t nat
        Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target  prot opt in  out  source  destination

        Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target  prot opt in  out  source  destination

        Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target  prot opt in  out  source  destination

        Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target  prot opt in  out  source  destination
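    With the security group open and iptables empty, a remaining common culprit is the application binding to 127.0.0.1 rather than all interfaces (on the instance, netstat -tlnp shows what address the Java app is actually bound to). A quick way to separate the two cases is a throwaway Python listener on the same port, tested from outside:

        import socket

        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        # Bind to all interfaces; binding to "127.0.0.1" instead would make
        # the port unreachable to clients outside the instance.
        srv.bind(("0.0.0.0", 54321))
        srv.listen(1)
        print("waiting on 54321...")
        conn, addr = srv.accept()
        print("connected from", addr)
        conn.close()

    If this listener is reachable from outside but the Java application is not, the fix is in the application's bind address rather than in EC2.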

  • How InfiniBand speed is related to processor speed

    - by user223231
    I have two identical servers and am very curious how to set up an InfiniBand interconnect between them. Both servers' basic specs are: CPU: 32 GHz = 2x Intel Xeon X5650 (6 cores, 2.66 GHz each); RAM: 24 GB per server. How do I determine what speed of InfiniBand will be enough for a perfect interconnect: SDR, DDR, QDR or FDR? My logic is 32 GHz = 32 Gb/s, so the 40 Gb/s one is enough. Am I right, or is it not that simple?

  • What's the name of this pattern?

    - by Wes
    I see this a lot in frameworks. You have a master class with which other classes register. The master class then decides which of the registered classes to delegate the request to. An example based on the passed-in class might be something like this:

        public interface Processor {
            boolean canHandle(Object objectToHandle);
            void handle(Object objectToHandle);
        }

        public class EvenNumberProcessor implements Processor {
            public boolean canHandle(Object objectToHandle) {
                if (!isNumeric(objectToHandle)) {
                    return false;
                }
                return isEven(objectToHandle);
            }

            public void handle(Object objectToHandle) {
                // Optionally call canHandle again to ensure the calling class is fulfilling its contract
                doSomething();
            }
        }

        public class OddNumberProcessor implements Processor {
            public boolean canHandle(Object objectToHandle) {
                if (!isNumeric(objectToHandle)) {
                    return false;
                }
                return isOdd(objectToHandle);
            }

            public void handle(Object objectToHandle) {
                // Optionally call canHandle again to ensure the calling class is fulfilling its contract
                doSomething();
            }
        }

        // Can optionally implement the Processor interface itself
        public class ProcessorDelegator {
            private final List<Processor> processors = new ArrayList<>();

            public void addProcessor(Processor processor) {
                processors.add(processor);
            }

            public void process(Object objectToProcess) {
                // Look up the relevant processor, either by keeping a list of what
                // they can process or by querying each one in turn.
                Processor chosenProcessor = chooseProcessor(objectToProcess);
                chosenProcessor.handle(objectToProcess);
            }

            private Processor chooseProcessor(Object objectToProcess) {
                for (Processor p : processors) {
                    if (p.canHandle(objectToProcess)) {
                        return p;
                    }
                }
                throw new IllegalArgumentException("no processor registered for " + objectToProcess);
            }
        }

    Note there are a few variations on this. In one variation the subclasses provide a list of things they can process, which the ProcessorDelegator understands. The other variation, sketched in the code above, is where each is queried in turn. This is similar to Chain of Command, but I don't think it's the same, as Chain of Command means that the processor needs to pass the request on to other processors. The other variation is where the ProcessorDelegator itself implements the interface, which means you can get trees of ProcessorDelegators which specialise further. In the above example you could have a numeric ProcessorDelegator which delegates to an even/odd processor, and a string ProcessorDelegator which delegates to different string processors. My question is: does this pattern have a name?

  • Modifying PROCTHROTTLEMAX with powercfg has no effect in 2008 R2

    - by AlexC
    I am trying to make the CPU transition to a lower P-state. I used pwrtest to run the relevant tests, and now I want to cap the processor frequency at 50%. I executed the following command:

        powercfg -setacvalue SCHEME_BALANCED SUB_PROCESSOR PROCTHROTTLEMAX 50

    When I query the scheme, the value is set to the desired value. However, the processor frequency is not modified (I am using CPU-Z to check the frequency). My system is running Windows 2008 R2. Any ideas? Thanks!
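    One thing worth ruling out (an assumption, since the question doesn't say whether the scheme was re-applied): powercfg -setacvalue edits the stored scheme, but the running power policy generally only picks the change up once the scheme is set active again. A Python sketch of the full sequence (the two commands are the point; subprocess is just one way to script them):

        import subprocess

        # Write the new maximum processor state (50%) into the Balanced scheme...
        subprocess.run(
            ["powercfg", "-setacvalue", "SCHEME_BALANCED",
             "SUB_PROCESSOR", "PROCTHROTTLEMAX", "50"],
            check=True,
        )

        # ...then re-activate the scheme so the stored value takes effect.
        subprocess.run(["powercfg", "-setactive", "SCHEME_BALANCED"], check=True)

    Note also that PROCTHROTTLEMAX caps the processor state, which maps to frequency only indirectly, so CPU-Z may not show exactly half the rated clock even once the setting is active.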
