Search Results

Search found 45843 results on 1834 pages for 'network access'.

Page 269/1834

  • Keeping sync in multiplayer RTS game that uses floating point arithmetic

    - by Calmarius
    I'm writing a 2D space RTS game in C#. Single player works. Now I want to add some multiplayer functionality. I googled around and it seems there is only one way to have thousands of units continuously moving without a powerful net connection: send only the commands over the network while every player runs the same simulation. And now there is a problem: the entire engine uses doubles everywhere, and floating point results depend heavily on compiler optimizations and CPU architecture, so it is very hard to keep things synchronized. The game is not grid based at all, and it has a simple physics engine to move the spaceships (ships have impulse and angular momentum...). So recoding the entire thing to use fixed point would be quite cumbersome (but probably the only solution). I have two options so far: say goodbye to the current code and restart from scratch using integers, or make the game LAN-only, where there is enough bandwidth for 8 players with thousands of units while sending positions, orientation, etc. in (almost) every frame... So I'm looking for better ideas (or even tips on migrating the code to fixed point without messing everything up...).
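
    For illustration only: the lockstep approach described above only works if every peer's math is bit-identical, which is what fixed-point (integer) arithmetic buys. The sketch below shows the idea in Python rather than the game's C#; the Q16 format, tick length, and helper names are made up for the example.

        FRAC_BITS = 16                    # Q-format: 16 fractional bits
        ONE = 1 << FRAC_BITS

        def to_fixed(x):
            """Convert a float/int once, at load time; simulation state stays integer."""
            return int(round(x * ONE))

        def fx_mul(a, b):
            return (a * b) >> FRAC_BITS

        def fx_div(a, b):
            return (a << FRAC_BITS) // b

        def to_float(a):
            """Only for rendering/display, never fed back into the simulation."""
            return a / ONE

        # Integrate a ship's position for one tick: identical on every machine
        # because only integer adds, multiplies and shifts are involved.
        pos = to_fixed(10.5)
        vel = to_fixed(0.25)
        dt = to_fixed(1.0 / 30.0)         # fixed simulation tick
        pos = pos + fx_mul(vel, dt)
        print(to_float(pos))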

    Read the article

  • PHP access class inside another class

    - by arxanas
    So I have two classes like this:

        class foo { /* code here */ }
        $foo = new foo();

        class bar {
            global $foo;
            public function bar () {
                echo $foo->something();
            }
        }

    I want to access the methods of foo inside all methods of bar, without declaring it in each method inside bar, like this:

        class bar {
            public function bar () {
                global $foo;
                echo $foo->something();
            }
            public function barMethod () {
                global $foo;
                echo $foo->somethingElse();
            }
            /* etc */
        }

    I don't want to extend it, either. I tried using the var keyword, but it didn't seem to work. What do I do in order to access the other class "foo" inside all methods of bar?
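
    For illustration, the usual way to make one object's methods available to every method of another class is to hand the collaborator in once through the constructor and store it on the instance. The sketch below shows the pattern in Python with made-up names mirroring the question; in PHP the same idea is a constructor argument stored in a property and used as $this->foo.

        class Foo:
            def something(self):
                return "something"

            def something_else(self):
                return "something else"

        class Bar:
            def __init__(self, foo):
                # Stored once; every method can use it without re-declaring it.
                self.foo = foo

            def bar(self):
                print(self.foo.something())

            def bar_method(self):
                print(self.foo.something_else())

        bar = Bar(Foo())
        bar.bar()          # -> something
        bar.bar_method()   # -> something else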

    Read the article

  • Access violation using LocalAlloc()

    - by PaulH
    I have a Visual Studio 2008 Windows Mobile 6 C++ application that is using an API that requires the use of LocalAlloc(). To make my life easier, I created an implementation of a standard allocator that uses LocalAlloc() internally:

        /// Standard library allocator implementation using LocalAlloc and LocalReAlloc
        /// to create a dynamically-sized array.
        /// Memory allocated by this allocator is never deallocated. That is up to the
        /// user.
        template< class T, int max_allocations >
        class LocalAllocator
        {
        public:
            typedef T value_type;
            typedef size_t size_type;
            typedef ptrdiff_t difference_type;
            typedef T* pointer;
            typedef const T* const_pointer;
            typedef T& reference;
            typedef const T& const_reference;

            pointer address( reference r ) const { return &r; };
            const_pointer address( const_reference r ) const { return &r; };

            LocalAllocator() throw() : c_( NULL ) { };

            /// Attempt to allocate a block of storage with enough space for n elements
            /// of type T. n>=1 && n<=max_allocations.
            /// If memory cannot be allocated, a std::bad_alloc() exception is thrown.
            pointer allocate( size_type n, const void* /*hint*/ = 0 )
            {
                if( NULL == c_ )
                {
                    c_ = LocalAlloc( LPTR, sizeof( T ) * n );
                }
                else
                {
                    HLOCAL c = LocalReAlloc( c_, sizeof( T ) * n, LHND );
                    if( NULL == c )
                        LocalFree( c_ );
                    c_ = c;
                }
                if( NULL == c_ )
                    throw std::bad_alloc();
                return reinterpret_cast< T* >( c_ );
            };

            /// Normally, this would release a block of previously allocated storage.
            /// Since that's not what we want, this function does nothing.
            void deallocate( pointer /*p*/, size_type /*n*/ )
            {
                // no deallocation is performed. that is up to the user.
            };

            /// maximum number of elements that can be allocated
            size_type max_size() const throw() { return max_allocations; };

        private:
            /// current allocation point
            HLOCAL c_;
        }; // class LocalAllocator

    My application is using that allocator implementation in a std::vector<>:

        #define MAX_DIRECTORY_LISTING 512

        std::vector< WIN32_FIND_DATA,
                     LocalAllocator< WIN32_FIND_DATA, MAX_DIRECTORY_LISTING > > file_list;

        WIN32_FIND_DATA find_data = { 0 };
        HANDLE find_file = ::FindFirstFile( folder.c_str(), &find_data );
        if( NULL != find_file )
        {
            do
            {
                // access violation here on the 257th item.
                file_list.push_back( find_data );
            } while ( ::FindNextFile( find_file, &find_data ) );
            ::FindClose( find_file );
        }

        // data submitted to the API that requires LocalAlloc()'d array of WIN32_FIND_DATA structures
        SubmitData( &file_list.front() );

    On the 257th item added to the vector, the application crashes with an access violation:

        Data Abort: Thread=8e1b0400 Proc=8031c1b0 'rapiclnt'
        AKY=00008001 PC=03f9e3c8(coredll.dll+0x000543c8) RA=03f9ff04(coredll.dll+0x00055f04)
        BVA=21ae0020 FSR=00000007
        First-chance exception at 0x03f9e3c8 in rapiclnt.exe: 0xC0000005: Access violation reading location 0x01ae0020.

    LocalAllocator::allocate is called with an n=512 and LocalReAlloc() succeeds. The actual access violation occurs within the std::vector code after the LocalAllocator::allocate call:

        0x03f9e3c8
        0x03f9ff04
        > MyLib.dll!stlp_std::priv::__copy_trivial(const void* __first = 0x01ae0020, const void* __last = 0x01b03020, void* __result = 0x01b10020)  Line: 224, Byte Offsets: 0x3c  C++
          MyLib.dll!stlp_std::vector<_WIN32_FIND_DATAW,LocalAllocator<_WIN32_FIND_DATAW,512> >::_M_insert_overflow(_WIN32_FIND_DATAW* __pos = 0x01b03020, _WIN32_FIND_DATAW& __x = {...}, stlp_std::__true_type& __formal = {...}, unsigned int __fill_len = 1, bool __atend = true)  Line: 112, Byte Offsets: 0x5c  C++
          MyLib.dll!stlp_std::vector<_WIN32_FIND_DATAW,LocalAllocator<_WIN32_FIND_DATAW,512> >::push_back(_WIN32_FIND_DATAW& __x = {...})  Line: 388, Byte Offsets: 0xa0  C++
          MyLib.dll!Foo(unsigned long int cbInput = 16, unsigned char* pInput = 0x01a45620, unsigned long int* pcbOutput = 0x1dabfbbc, unsigned char** ppOutput = 0x1dabfbc0, IRAPIStream* __formal = 0x00000000)  Line: 66, Byte Offsets: 0x1e4  C++

    If anybody can point out what I may be doing wrong, I would appreciate it. Thanks, PaulH

    Read the article

  • Windows 7 VPN won't allow FTP, route FTP traffic through local network

    - by Rolf Herbert
    I use a VPN on my Windows 7 PC for privacy and currently route all my traffic through the VPN. This arrangement is fine and it's plenty fast. Unfortunately, the VPN does not allow any FTP traffic, so when I am updating websites I have to disconnect the VPN and work through my local connection. This is annoying and cumbersome. I have read a little about split tunnelling, but that is not quite what I need, and it often talks about 'internet' traffic rather than traffic for specific IPs or ports. Is it possible to route traffic on certain ports through the local connection, or is it possible to route traffic to certain IPs through the local connection, using stuff built into Windows 7? Thanks

    Read the article

  • Configuring Wireless Network

    - by Vinod K
    I have a Vyatta router on VMware with two interfaces, eth0 and eth1. eth0 faces the internet and is in NAT mode with DHCP on; eth1 is in bridged mode with my Ethernet card and has the IP 10.0.2.34/24. The Ethernet card itself is at IP 10.0.2.95/24. I have defined the NAT rule, so the internet is available at eth1 too. Now I am connecting a wireless router (an iBall router) at eth1: I have connected the router using a cable to the Ethernet interface of my laptop and configured its WAN connection type as "Static IP" with "10.0.2.34/24". All the clients that connect through the wireless router cannot reach the internet, though. Could anyone provide a solution for this? Thank you!

    Read the article

  • My laptop can connect to every wireless network except FiOS

    - by going crazy
    I have always been able to connect to every wireless router, secured or unsecured, WEP or WPA. Then I had FiOS installed and could not connect. Verizon suggested it was my computer and gave me an external wireless drive to use, and it worked. I got rid of FiOS, went back to Comcast, and threw out the drive, but now, 2 years later, I am sitting at my friend's house having the same problem. My tech-savvy friend told me it is a firewall setting or something in my antivirus software, but I disabled them both and still nothing works... Funny that it is only FiOS.

    Read the article

  • Time issues on the Network -- How to find the Root Cause

    - by Jeff
    A number of application servers started erroring out in my domain. Troubleshooting led me to a misconfiguration of NTP. I fixed the issue, but I don't know how the issue arose in the first place. The only errors I can find are:

        System Error: 1097 Source: useenv
        System Error: 1058 Source: useenv
        System Error: 1030 Source: useenv
        System Error: 1000 Source: mmc

    How else can I find out why NTP started acting up on my domain? Are there any troubleshooting steps to diagnose why my DC started pulling from a random timeserver with the wrong time? EDIT: The current issue actually remains: the two 2003 DCs are not syncing with the PDC (a 2k8 box). w32tm /resync reports "The computer did not resync because no time data was available."

    Read the article

  • compressed archive with quick access to an individual file

    - by eric.frederich
    I need to come up with a file format for a new application I am writing. This file will need to hold a bunch of other files, which are mostly text but can be other formats as well. Naturally, a compressed tar file seems to fit the bill. The problem is that I want to be able to retrieve some data from the file very quickly, and getting just a particular file out of a tar.gz file seems to take longer than it should. I am assuming this is because it has to decompress the entire archive even though I just want one file. With a regular uncompressed tar file I can get that data really quickly. Let's say the file I need quickly is called data.dat. For example, the command

        tar -x data.dat -zf myfile.tar.gz

    is what takes a lot longer than I'd like. MP3 files have ID3 data and JPEG files have EXIF data that can be read quickly without opening the entire file. I would like my data.dat file to be available in a similar way. I was thinking that I could leave it uncompressed and separate from the rest of the files in myfile.tar.gz; I could then create a tar file of data.dat and myfile.tar.gz, and hopefully that data could be retrieved faster because it is at the head of the outer tar file and is uncompressed. Does this sound right... putting a compressed tar inside of a tar file? Basically, my need is an archive type of file with quick access to one particular file. Tar does this just fine, but I'd also like to have the data compressed, and as soon as I do that, I no longer have quick access. Are there other archive formats that will give me the quick access I need? As a side note, this application will be written in Python. If the solution calls for re-inventing the wheel with my own binary format, I am familiar with C and would have no problem writing the Python module in C. Ideally I'd just use tar, dd, cat, gzip, etc., though. Thanks, ~Eric
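
    For reference, one format that fits the "quick access to one member" need without a custom container: zip compresses each member independently and keeps a central directory, so a single file can be extracted without inflating the whole archive, and Python supports it in the standard library. A minimal sketch (file names are just placeholders):

        import zipfile

        # Write the archive: each member is compressed on its own.
        with zipfile.ZipFile("myfile.zip", "w", compression=zipfile.ZIP_DEFLATED) as zf:
            zf.writestr("data.dat", b"the small record I need quickly")
            zf.writestr("docs/report.txt", b"bulk content...")

        # Later: read just one member; only that member is decompressed.
        with zipfile.ZipFile("myfile.zip") as zf:
            quick = zf.read("data.dat")
        print(quick)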

    Read the article

  • Intermittent USB 3.0 access - How do I troubleshoot?

    - by Billy ONeal
    I've got a WD Passport hard disk with "SuperSpeed" USB 3.0 support. When I use my USB 3.0 flash drive (this is a Lenovo X220 laptop), USB 3.0 consistently works. But when I use the Passport drive, almost without fail the connection drops to USB 2.0. Touching the cable seems to immediately trigger the problem, but it also seems to happen on its own. I've got another cable on order right now... but is it likely the cable is the issue here? Is there anything else I can check?

    Read the article

  • Filtering Client IP from Access Log for Urchin

    - by Ram Prasad
    I have some Apache logs to process, and since the webserver is behind two levels of reverse proxies, I am getting two IPs in the X-Forwarded-For header. For example:

        208.34.234.55, 127.0.0.1 - - [29/Oct/2009:21:38:13 -0500] "GET /monkey.html HTTP/1.0" 200 20845 0 0 "http://www.monkey.com/" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.15) Gecko/2009101601 Firefox/3.0.15 (.NET CLR 3.5.30729)"

    Now, how do I filter this in Urchin (or remove it in Apache logging) so that 127.0.0.1 is removed from processing? Currently Urchin is not able to recognize the multiple IP addresses, so it does not log the remote IP.
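
    For illustration, one option is to pre-process the log before Urchin reads it, keeping only the first (client) address of the X-Forwarded-For pair. A rough Python sketch, assuming lines shaped like the example above (run as, e.g., python clean_log.py < access.log > cleaned.log):

        import re
        import sys

        # "208.34.234.55, 127.0.0.1 - - [...]"  ->  "208.34.234.55 - - [...]"
        pattern = re.compile(r"^(\d{1,3}(?:\.\d{1,3}){3}),\s*\S+\s")

        for line in sys.stdin:
            sys.stdout.write(pattern.sub(r"\1 ", line))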

    Read the article

  • search nfs network volume from mac client

    - by user1440190
    Asked: how does a Mac OS X SL or Lion user search the cluster for a particular file (foo.txt)?

    "From the cluster, you would need to run some form of recursive lookup for the file desired. As an example, using 'find':

        RAM-1# find /ifs | grep test.txt
        /ifs/Elements/avid2test.txt
        /ifs/Elements/test.txt

    I would suggest contacting Apple support regarding their recommendation for searching for files on remote file systems from the Mac client itself."

    OK, that's great, but I don't want users using the CLI! Does anyone know a good method (non-CLI)? Spotlight is not an option. BTW, the cluster is roughly 80TB.

    Read the article

  • Limiting and redirecting port access with user agent

    - by linuxcore
    I'm trying to write an iptables string-match rule to block http://domain.com:8888 and https://domain.com:8888 when the supplied string matches, and another rule to redirect those ports from 8888 to 7777. I tried the following rules, but unfortunately they didn't work:

        iptables -A INPUT -p tcp -s 0.0.0.0/0 -m string --string linuxcore --algo bm --sport 8888 -j DROP
        iptables -t raw -A PREROUTING -m string --algo bm --string linuxcore -p tcp -i eth0 --dport 8888 -j DROP
        iptables -t nat -A PREROUTING -p tcp --dport 8888 -m string --algo bm --string "linuxcore" -j REDIRECT --to-port 7777
        iptables -A INPUT -t nat -p tcp --dport 8888 -m string --algo bm --string "linuxcore" -j DROP

    I want to do this with iptables, not the webserver, because the server may not have a webserver and those ports may be handled by an internal proxy or something like that.

    Read the article

  • nginx: rewrite URL but have original URL stored in access.log as 200

    - by mhambra
    I'm setting up a link tracking system, which (temporarily) involves adding /link/id/ in front of the URL (like http://server/data/id/publication/id/):

        rewrite data/id/(.*) http://server/$1;

    The request is logged as:

        ip - - [17/Nov/2011:10:07:19 +0300] "GET /data/id/publication/id.html HTTP/1.1" 302 154 "-" "UA"

    For compatibility with AWStats, I want 200 logged instead of 302. (nginx can return 301 out of the box with the permanent option, but that's inappropriate too.) What are my options here? Will a combination of location { } and rewrite do the job?

    Read the article

  • Network load balancing: efficiency and limits?

    - by Vimvq1987
    I'm about to study NLB on Windows Server 2003. It covers both of my current interests: scalability and high availability. But I don't know how it performs in a production environment. Is NLB an efficient solution? How is it implemented in the real world? Is it popular? What are its limits? Thank you so much for answering my questions. :)

    Read the article

  • Software firewall used in network

    - by user45019
    Hi, I have a medium-sized organization with between 300 and 500 users. I am looking for a software firewall for this type of organization; I am not looking for a hardware firewall. Which software do you prefer? Can you suggest some software firewalls suitable for an organization of this size? Thanks, Gary

    Read the article

  • Google Docs revision/access control

    - by brainjam
    I've worked on shared Google Docs with family members, but don't really know how, or whether, Google prevents two users from modifying the same document at the same time and clobbering one another's work. How does Google Docs handle this? Is a document 'locked' whenever somebody opens it for revisions? I haven't been able to find an answer to this in their documentation/help.

    Read the article

  • Why is access to my database very slow?

    - by Fabien
    I have a MySQL database that used to work perfectly fine, but now it is dead slow on startup. When I type in

        $> mysql -u foo bar

    I wait about 30 seconds before I get a prompt, with the usual message:

        Reading table information for completion of table and column names
        You can turn off this feature to get a quicker startup with -A

    Of course, I tried that and it goes a lot faster:

        $> mysql -u foo bar -A

    But why do I have to wait so long on a regular startup? This is not a very big database, and the data does not seem to be corrupted (everything looks fine after startup). I have no other client connecting to the MySQL server at the same time (only one process is shown by show full processlist), and I have already restarted the mysqld service. What's going on?
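
    For reference, the delay is the client building its tab-completion hash of table and column names; -A (--no-auto-rehash) skips that step, and the same setting can be made permanent in the client configuration. A minimal sketch, assuming the usual per-user config location:

        # ~/.my.cnf -- affects only the mysql command-line client
        [mysql]
        no-auto-rehash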

    Read the article

  • Run WMIC command across network

    - by C-dizzle
    Instead of typing this in a command prompt one machine at a time:

        wmic /node:ipaddress /user:administrator /password:mypassword bios get serialnumber

    how can I run it against an entire subnet and output the results to a text document? Since I do this every couple of months to verify our inventory of computers, I would assume there is a much easier way to put this in a batch script instead of doing it manually.
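
    For illustration, a rough sketch of looping that same wmic command over a /24 and collecting everything in one text file. Python is used here instead of a batch file, and the subnet, credentials, and timeout are placeholders:

        import subprocess

        SUBNET = "192.168.1."        # placeholder /24
        USER = "administrator"       # placeholder credentials
        PASSWORD = "mypassword"

        with open("serials.txt", "w") as out:
            for host in range(1, 255):
                ip = f"{SUBNET}{host}"
                cmd = [
                    "wmic", f"/node:{ip}", f"/user:{USER}", f"/password:{PASSWORD}",
                    "bios", "get", "serialnumber",
                ]
                try:
                    result = subprocess.run(cmd, capture_output=True, text=True, timeout=30)
                    out.write(f"{ip}:\n{result.stdout}\n")
                except subprocess.TimeoutExpired:
                    out.write(f"{ip}: no response\n")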

    Read the article

  • Finding Network Status of specific process name

    - by Moev4
    I am looking for the cleanest way on Linux to find the port status for a port being used by a specified program name, via the command line. I have seen that netstat -p lists all PIDs, but I haven't seen anything corresponding to specific process names. Any help would be appreciated.
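
    For illustration, the same lookup can be done programmatically with the psutil library, which maps sockets to PIDs and PIDs to names. A sketch, assuming psutil is installed (listing other users' sockets generally needs root) and using a placeholder process name:

        import psutil

        TARGET = "nginx"   # placeholder program name

        # Walk all inet connections and keep the ones owned by the target program.
        for conn in psutil.net_connections(kind="inet"):
            if conn.pid is None:
                continue
            try:
                name = psutil.Process(conn.pid).name()
            except psutil.NoSuchProcess:
                continue
            if name == TARGET:
                laddr = f"{conn.laddr.ip}:{conn.laddr.port}" if conn.laddr else "-"
                print(conn.pid, name, laddr, conn.status)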

    Read the article

  • gpasswd and access to a file or directory

    - by PeanutsMonkey
    As I understand it, if I run the command gpasswd -A username directoryname, I assign administrator privileges to username for the directory directoryname. This means that username is able to add new members to the group for directoryname without root privileges. Does this also mean that username belongs to the group, or do I need to add username to the group using usermod, gpasswd -a, or gpasswd -M?

    Read the article

  • How to route broadcast packets from a machine with two network interfaces on the same subnet

    - by Syam
    I run RHEL 5 and have two NICs on one machine connected to the same subnet:

        eth0 192.168.100.10
        eth1 192.168.100.11

    My application needs to receive and transmit UDP packets (both unicast & broadcast) via these interfaces. I've found the way to handle the ARP problem, and I've added routes to handle the routing problem:

        ip rule add from 192.168.100.10 lookup 10
        ip route add table 10 default src 192.168.100.10 dev eth0

    (and similarly, table 11 for eth1). The problem is that only unicast packets get routed properly; broadcast packets always go out through eth0. I tried removing the rule for 192.168.100.0 & 192.168.100.255 from table 255 and adding them to my tables, but then I see ARP requests being sent out for packets to 192.168.100.255 (obviously, no nodes respond and nobody gets any data). Due to several techno-political issues, I'm stuck with this configuration and can't change subnets or try something different. I've tried SO_BINDTODEVICE and it works, but I'd prefer a solution that doesn't need my application to run as root. Is there a way to get this working? Any help is highly appreciated.
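
    For illustration, this is roughly what the SO_BINDTODEVICE approach mentioned above looks like in Python; it pins a socket's traffic (including broadcasts) to one NIC, but as noted it needs root/CAP_NET_RAW, which is exactly the drawback described. Interface and port values are placeholders:

        import socket

        IFACE = b"eth1"              # placeholder interface
        PORT = 5000                  # placeholder UDP port

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        # The constant is exposed by the socket module on Linux (Python 3.8+);
        # fall back to its numeric value (25 on Linux) on older interpreters.
        SO_BINDTODEVICE = getattr(socket, "SO_BINDTODEVICE", 25)
        sock.setsockopt(socket.SOL_SOCKET, SO_BINDTODEVICE, IFACE)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.bind(("192.168.100.11", PORT))
        sock.sendto(b"hello", ("192.168.100.255", PORT))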

    Read the article

  • Postfix sasl: Relay access Denied (state 14)

    - by Primoz
    I have Postfix installed with Dovecot. There are no problems when I'm sending e-mails from my server; however, all incoming e-mails are rejected. My main.cf file:

        queue_directory = /var/spool/postfix
        command_directory = /usr/sbin
        daemon_directory = /usr/libexec/postfix
        mail_owner = postfix
        inet_interfaces = all
        mydestination = localhost, $mydomain, /etc/postfix/domains/domains
        virtual_maps = hash:/etc/postfix/domains/addresses
        unknown_local_recipient_reject_code = 550
        mynetworks = 127.0.0.0/8
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        home_mailbox = Maildir/
        debug_peer_level = 2
        debugger_command = PATH=/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin xxgdb $daemon_directory/$process_name $process_id & sleep 5
        sendmail_path = /usr/sbin/sendmail.postfix
        newaliases_path = /usr/bin/newaliases.postfix
        mailq_path = /usr/bin/mailq.postfix
        setgid_group = postdrop
        html_directory = no
        manpage_directory = /usr/share/man
        sample_directory = /usr/share/doc/postfix-2.3.3/samples
        readme_directory = /usr/share/doc/postfix-2.3.3/README_FILES
        smtpd_sasl_type = dovecot
        smtpd_sasl_path = private/auth
        smtpd_sasl_auth_enable = yes
        smtpd_recipient_restrictions = check_policy_service inet:127.0.0.1:9999, permit_mynetworks, permit_sasl_authenticated, reject_non_fqdn_recipient, reject_unknown_recipient_domain, reject_unauth_destination,
        smtpd_sender_restriction = reject_non_fqdn_sender
        broken_sasl_auth_clients = yes

    UPDATE: Now, when e-mail comes to the server, the server tries to reroute the mail. For example, if the message was sent to [email protected], my server changes that to [email protected] and then the mail bounces because there's no such domain on my server.

    Read the article

  • access models and forms within modules

    - by sims
    Hi Stackers, what is the best way to access my models and forms from a controller of a module? Let's explain with "pictures": /application/module/storage/controllers/IndexController.php needs to call readAction in the class called storage_Model_Files in /application/module/storage/models/Files.php. I've made this app's dir structure and these forms and models with zf.sh (Zend_Tool). I've read about all sorts of ways of manually including these files. I want to lazy load them, much like everything is done automatically with the default module. I can't seem to find how in the docs. Does that make sense? I have

        resources.frontController.moduleDirectory = APPLICATION_PATH "/modules"

    in my application.ini file, so I can access my controllers fine. Thanks for your help!

    Read the article
