Search Results

Search found 39613 results on 1585 pages for 'large object heap'.


  • iptables, blocking large numbers of IP Addresses

    - by Twirrim
    I'm looking to block IP addresses in a relatively automated fashion if they look to be 'screen scraping' content from websites that we host. In the past this was achieved by some ingenious Perl scripts and OpenBSD's pf. pf is great in that you can hand it big tables of IP addresses and it will handle blocking based on them efficiently. However, for various reasons (before my time) the decision was made to switch to CentOS. iptables doesn't natively provide a way to block large numbers of addresses (I'm told it wasn't unusual to be blocking 5,000+), and I'm a bit cautious about adding that many individual rules to iptables. ipt_recent would be awesome for this, and it also provides a lot of flexibility for merely slowing access down severely, but there is a bug in the CentOS kernel that stops me from using it (reported, but awaiting a fix). Using ipset would entail compiling a more up-to-date version of iptables than ships with CentOS, which, while I'm perfectly capable of doing it, I'd rather avoid from a patching, security, and consistency perspective. Other than those two, nfblock looks like a reasonable alternative. Is anyone aware of other ways of achieving this? Are my concerns about several thousand individual iptables rules unfounded?
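
    For reference, a minimal sketch of the ipset approach in Python: it writes a restore file that 'ipset restore' can load in one pass, so thousands of addresses end up behind a single iptables rule. The set name "blocklist" and the file path are illustrative, and this assumes an ipset-capable kernel and userland:

        # Sketch: emit an "ipset restore" script for a large blocklist.
        # "blocklist" is an illustrative set name; the address list is a stand-in.
        addresses = ["192.0.2.1", "198.51.100.7"]  # really 5000+ entries

        with open("blocklist.restore", "w") as f:
            f.write("create blocklist hash:ip -exist\n")
            for ip in addresses:
                f.write("add blocklist %s -exist\n" % ip)

        # Load with:  ipset restore < blocklist.restore
        # Then one rule matches the whole set:
        #   iptables -I INPUT -m set --match-set blocklist src -j DROP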

    Read the article

  • Clipboard bug in Wordpad in Windows 7 (accidentally pasting large file into application)

    - by frenchglen
    In Windows 7 I use WordPad, and I really like it. For my needs it's lean and fast, yet has the formatting features I'm after when working on my TXT/RTF files daily. I don't intend to change text editors. There's one really bad bug that has always plagued me: if the clipboard holds a large file, like a 238 MB FLAC file, and you accidentally paste it into WordPad for whatever reason, the application hangs for a very long time (around two hours; it depends on how big the file is, because WordPad tries to 'handle' it). You either have to close the application and lose any unsaved changes, or go do something else until the paste finishes (it eventually drops the file's icon into the document, just as it appears in Windows Explorer). It's a WordPad bug (or a Windows one). Is there some solution for this? Or is the problem fixed in Windows 8, if anyone can tell me? I'm not going to try out Windows 8 myself merely to answer this question; that's what I'm asking it on Super User for! I'm really hoping it's one of those little-yet-big things they've fixed in Windows 8 (like removing the 255-character file path limit in Explorer, which is awesome). Thank you for your help, if you have Windows 8 handy and can test this. :)

    Read the article

  • running chkdsk /F on a large mounted NTFS image file gets BSOD (Windows Vista)

    - by Citizentools
    Using ddrescue, I created ISO files from the C: and D: drives of my Windows XP laptop's hard disk (after the laptop stopped booting and chkdsk etc. wouldn't fix it). I was able to mount the 60 GB D.iso with OSFMount and successfully recreated the D: drive on another laptop. The C.iso image is more problematic. ddrescue left about 3 MB of the 85 GB total unrecovered after multiple passes (no big worries about this), and I can mount the image with OSFMount on a Windows Vista laptop. However, when I run chkdsk /F /V on the mounted drive (mounted as H:), I consistently get a blue screen (BSOD). chkdsk makes it through the first three passes, including index fixes and security descriptor fixes, without errors, but triggers a BSOD when it attempts to fix the volume records or bitmap. If I instead try to fix the drive by clicking Properties, Tools, Error checking, Check Now, "Automatically fix file system errors", I get an alert box reading "Windows was unable to complete the disk check." I'd try a tool other than OSFMount, but it's the only thing I've found so far that will mount large ISO files, and it has worked for me up to now in this process. [Update 2011-11-13 18:41 EST] I just ran the same process on the original Windows XP laptop with a different internal drive, and chkdsk worked like a champ. So the question is still interesting, but decidedly less urgent.

    Read the article

  • Backing up large network (~200 clients) -- Enough Bandwidth?

    - by mtkoan
    My company wants to institute a backup plan for all of the clients on our network, about 200 machines. We back up our servers and SQL databases regularly, but it's been our policy not to back up individuals. What is most critical for people is their Documents folder and their Outlook PST files. PST files can be very large; most people's here are around 1-1.5 GB. So with PST files alone, that is 200-300 GB of data needing to be transferred daily to a server for backup. We could compress first and then transfer, but many of the machines are very old, and such a task would grind them to a halt. Isn't this the reason networks use things like VMware, to reduce network traffic and streamline backups? Or is that only to reduce hardware costs? Would this much network traffic every day drastically slow down our network, enough that we'd have to mandate it be done at night only? Or could we stagger it throughout the day? I'd really appreciate any input, thank you.
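
    As a rough sanity check on the numbers, here is a back-of-the-envelope estimate in Python (assuming the full 300 GB moves every night and that the LAN sustains about 60% of wire speed; both figures are illustrative):

        # Transfer-time estimate for the nightly PST copy.
        total_gb = 300                   # worst-case daily PST volume
        for link_mbps in (100, 1000):    # Fast Ethernet vs. gigabit
            usable = link_mbps * 0.6     # assume ~60% of wire speed in practice
            seconds = total_gb * 8 * 1024 / usable
            print("%4d Mb/s link: ~%.1f hours" % (link_mbps, seconds / 3600.0))

        #  100 Mb/s link: ~11.4 hours  -- longer than any night
        # 1000 Mb/s link: ~1.1 hours   -- feasible in an overnight window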

    Read the article

  • configuring lighttpd for large downloads

    - by ahmedre
    I run a web site that hosts general scripted pages (PHP, etc.) and MP3 downloads (some of which are fairly large, up to 200 MB). I am running lighttpd on Linux (Ubuntu 64-bit) servers. Everything is fine, but under high load the server is not accessible (or very slow; even SSHing in takes a while), and I am guessing this is due to a huge number of simultaneous MP3 downloads at that time. Consequently, DNS sees the server as down and redirects all traffic to the other servers, and after a while it comes back up and things work again. So what's the best way to fix this? Ideally, I want the server to continue running (the PHP pages should always work, but downloads don't always have to). Should I just run two web servers (one for the downloads and one for the PHP pages), or is it perhaps something I can fix in my lighttpd configuration? Here are the relevant snippets from my configuration:

        server.max-worker = 4
        server.max-fds = 2048
        server.max-keep-alive-requests = 4
        server.max-keep-alive-idle = 4
        server.stat-cache-engine = "fam"

        fastcgi.server = ( ".php" => ((
            "bin-path" => "/usr/bin/php-cgi",
            "socket" => "/tmp/php.socket",
            "max-procs" => 1,
            "idle-timeout" => 20,
            "bin-environment" => (
                "PHP_FCGI_CHILDREN" => "64",
                "PHP_FCGI_MAX_REQUESTS" => "1000"
            ),
            "bin-copy-environment" => ( "PATH", "SHELL", "USER" ),
            "broken-scriptfilename" => "enable"
        )) )

        # normal php site
        $HTTP["host"] =~ "bar.com" {
            server.document-root = "/usr/local/www/sites/bar.com/"
            accesslog.filename = "|/usr/sbin/cronolog /var/log/lighttpd/%m/%d/%H/bar.log"
        }

        # download site
        $HTTP["host"] =~ "(download|stream).foo.com" {
            server.document-root = "/home/audio/"
            dir-listing.activate = "enable"
            dir-listing.hide-dotfiles = "enable"
            evasive.max-conns-per-ip = 1
            evasive.silent = "enable"
            # connection.kbytes-per-second = 256
            accesslog.filename = "|/usr/sbin/cronolog /var/log/lighttpd/%m/%d/%H/download.log"
        }

    Read the article

  • Compressing and copying large files on Windows Server?

    - by Aaron
    I've been having a hard time copying large database backups from the database server to a test box at another site. I'm open to any ideas that would help me get this database moved without resorting to a USB hard drive and the mail. The database server is running Windows Server 2003 R2 Enterprise with 16 GB of RAM and two quad-core 3.0 GHz Xeon X5450s. The files are SQL Server 2005 backups between 100 GB and 250 GB. The pipe is not the fastest, and SQL Server backup files typically compress down to 10-40% of the original size, so it made sense to me to compress the files first. I've tried a number of tools, including:

        - gzip 1.2.4 (UnxUtils) and 1.3.12 (GnuWin)
        - bzip2 1.0.1 (UnxUtils) and 1.0.5 (Cygwin)
        - WinRAR 3.90
        - 7-Zip 4.65 (7za.exe)

    I've attempted to use the WinRAR and 7-Zip options for splitting into multiple segments. 7za.exe has worked well for me for database backups on another server, which has ~50 GB backups. I've also tried splitting the .BAK file first with various utilities and compressing the resulting segments. No joy with that approach either; no matter the tool, it ends up butting against the size of the file. Especially frustrating is that I've transferred files of similar size on Unix boxes without problems using rsync+ssh. Installing an SSH server is not an option in my situation, unfortunately. For example, this is how 7-Zip dies:

        H:\dbatmp>7za.exe a -t7z -v250m -mx3 h:\dbatmp\zip\db-20100419_1228.7z h:\dbatmp\db-20100419_1228.bak

        7-Zip (A) 4.65  Copyright (c) 1999-2009 Igor Pavlov  2009-02-03
        Scanning
        Creating archive h:\dbatmp\zip\db-20100419_1228.7z
        Compressing  db-20100419_1228.bak

        System error: Unspecified error
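
    If the splitting failures come from older 32-bit tools hitting large-file limits, one workaround is a hand-rolled splitter that only ever streams fixed-size pieces. A Python sketch (the chunk size and paths are illustrative; the pieces can be rejoined on the far side with "copy /b file.000+file.001 whole.bak"):

        # Sketch: stream a huge .BAK into fixed-size pieces.
        CHUNK = 2 * 1024 ** 3            # 2 GB per piece (illustrative)
        BUF = 64 * 1024 ** 2             # 64 MB read buffer (divides CHUNK evenly)
        src = r"h:\dbatmp\db-20100419_1228.bak"

        part, written, out = 0, 0, None
        with open(src, "rb") as infile:
            while True:
                data = infile.read(BUF)
                if not data:             # end of input
                    break
                if out is None:
                    out = open("%s.%03d" % (src, part), "wb")
                out.write(data)
                written += len(data)
                if written >= CHUNK:     # piece full: start the next one
                    out.close()
                    out, written, part = None, 0, part + 1
        if out:
            out.close()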

    Read the article

  • ext3: maximum recommended partition size / handling large partitions

    - by Hansi
    Hi! I would like to do an encrypted install of Ubuntu on a 2-terabyte drive (i.e., using LUKS/dm-crypt). In order not to have to type in passwords too often, the partitioning scheme will be 50 GB for / and about 1 TB for /home (and the rest for Windows 7), just for clarity. Even though LVM is by now regarded as stable, I don't want to leave more room for errors by introducing unnecessary layers of complexity. For both Ubuntu partitions I want encrypted ext3 with ext3's default block size (4K?). Thoughts: when I look at most partition schemes here on this site or elsewhere, I usually see at most about 400 or 500 GB partitions (maybe I haven't seen enough). There may be different reasons for this, but is reliability an issue here? Are larger ext3 partitions, say around 1 TB, harder to handle for the OS or the filesystem driver, or at some other level? If I make the partition too large, will it be harder to repair in case of corruption? Are there default ext3 settings I should change for 1 TB partitions? Question: what maximum partition size for ext3 do you recommend, and why? Thanks!

    Read the article

  • Maintenance window and recovery for a large database

    - by NYSystemsAnalyst
    One of our teams is developing a database that will be somewhat large (~500 GB) and grow from there (I know 500 GB may seem small to many of you, but it will be one of the larger databases in our shop). One of the issues they are grappling with is backing up and restoring the database. Basically, the database will have several "data" tables and one table used for storing images/documents. We need to accomplish the following:

        1. Be able to quickly back up and restore only the data tables (sans images) to our test server for debugging and testing purposes.
        2. In the event of a catastrophic database failure, restore the data tables only, to get most of the application up and running ASAP; then restore the images table when possible.
        3. Back up the database within the allotted nightly time window (a few hours).

    My questions are: is it possible to accomplish the first two goals while still having the images stored in the same database? If so, would we use filegroups, FILESTREAM, or something else? How do other shops back up their databases in a reasonable time window while maintaining high availability? Do you replicate to a second server and back up from there?

    Read the article

  • Long 'pause' after copying large files on windows 2008

    - by Ian
    I have a mystery regarding pauses after file copies on Windows Server 2008 (and other releases). When copying large files, like VHDs, to locally attached USB disks, I often see a long pause after the copy has reached 100%. As an example: robocopying VHD files. The bytes read/written count matches the VHD file size and robocopy shows 100%, but it pauses for several minutes. If I do nothing, it will continue eventually, but I have to wait quite some time, about as long as it took to get to 100%. The bytes read/bytes written counters for robocopy do not change during the pause. My first thought was that an antivirus had to scan the file, but I'm looking at a machine right now that has no AV installed and the pause still occurs, so that's ruled out. No other processes show their read/write byte counts going up. The behavior is the same if I use the copy command or xcopy. I've seen this on other systems but have never worked out the cause. Anyone got any suggestions as to what might be going on?
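
    One hypothesis worth testing is that the copy tool hits 100% as soon as its writes land in the OS write cache, and the pause is that cache flushing out to the slow USB disk. A small Python sketch to check this, timing buffered writes separately from the forced flush (the path is illustrative and should point at the USB disk):

        import os, time

        path = r"E:\flushtest.bin"          # illustrative path on the USB disk
        block = b"\0" * (64 * 1024 * 1024)  # 64 MB per write

        t0 = time.time()
        f = open(path, "wb")
        for _ in range(16):                 # 1 GB total
            f.write(block)
        t1 = time.time()                    # writes accepted (possibly only cached)
        f.flush()
        os.fsync(f.fileno())                # force the cache out to the device
        f.close()
        t2 = time.time()

        print("write() calls returned after %.1f s" % (t1 - t0))
        print("flush to device took another %.1f s" % (t2 - t1))
        os.remove(path)

    If the second number dominates, the 'pause' is just deferred I/O rather than a stalled copy.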

    Read the article

  • Optimizing Apache for large file serving

    - by D_Guy13
    I have a random problem with Apache that I can't quite figure out. Here is my setup: Windows Server 2008 R2, 64-bit, 5 GB RAM, an SSD with 200 MB/s read/write, and a dual-core CPU @ 2.1 GHz. A dump from mod_status:

        Server Version: Apache/2.4.7 (Win32) mod_limitipconn/0.24 mod_antiloris/0.5.2 PHP/5.5.9
        Server MPM: WinNT
        Apache Lounge VC11 Server Built: Nov 21 2013 20:13:01
        Current Time: Thursday, 21-Aug-2014 23:38:06 W. Europe Daylight Time
        Restart Time: Thursday, 21-Aug-2014 20:30:47 W. Europe Daylight Time
        Parent Server Config. Generation: 1
        Parent Server MPM Generation: 1
        Server uptime: 3 hours 7 minutes 18 seconds
        Server load: -1.00 -1.00 -1.00
        Total accesses: 283025 - Total Traffic: 1172.2 GB
        25.2 requests/sec - 106.8 MB/second - 4.2 MB/request
        62 requests currently being processed, 388 idle workers

    I am serving large .zip and .iso files using mod_xsendfile (file sizes range from 500 MB to 1.5 GB). The setup works and is running fine, but CPU usage is very unstable: it jumps all the time between 10% and 90%, and the server goes down when it hits 100%. In that case I have to hard-restart the server. The server is putting out traffic at 30 Mbps. Is there anything else I should think about to get more stable CPU usage? Is that CPU usage normal? Could switching to Linux help me achieve better CPU usage?

    Read the article

  • Potential impact of large broadcast domains

    - by john
    I recently switched jobs. By the time I left my last job, our network was three years old and had been planned very well (in my opinion). Our address range was split into a bunch of VLANs, with the largest subnet a /22. It was textbook. The company I now work for has built up its network over about 20 years. It's quite large, reaches multiple sites, and has an eclectic mix of devices. This organisation only uses VLANs for very specific things; I only know of one use of VLANs so far, and that is the SAN, which also crosses a site boundary. I'm not a network engineer, I'm a support technician, but occasionally I have to take network traces to debug problems, and I'm astounded by the quantity of broadcast traffic I see. The largest network is a straight Class B network with a /16 mask. Of course, if that were filled with devices, the network would likely grind to a halt. I think there are probably 2,000+ physical and virtual devices currently using that subnet, but it (mostly) seems to work. This practice seems to go against everything I've been taught. My question is: what measurement of which metric would tell me that there is too much broadcast traffic bouncing about the network? And what are the tell-tale signs that you are perhaps treading on thin ice? The way I see it, more and more devices are being added, and that can only mean more broadcast traffic, so there must be a threshold. Would things just get slower and slower, or would the effects be more subtle than that?
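
    On the measurement question, broadcast frames per second (and its trend as hosts are added) is the usual starting metric. A rough Linux-only sketch that counts Ethernet broadcasts with a raw socket (requires root; the interface name is illustrative):

        import socket, time

        ETH_P_ALL = 0x0003
        s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL))
        s.bind(("eth0", 0))                  # illustrative interface name

        count, start = 0, time.time()
        while time.time() - start < 10:      # sample for ten seconds
            frame = s.recv(2048)
            if frame[:6] == b"\xff" * 6:     # Ethernet broadcast destination MAC
                count += 1
        print("%.1f broadcast frames/sec" % (count / 10.0))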

    Read the article

  • Mysql InnoDB and quickly applying large updates

    - by Tim
    Basically my problem is that I have a large table of about 17,000,000 products to which I need to apply a bunch of updates really quickly. The table has 30 columns, with the id set as int(10) AUTO_INCREMENT. I have another table in which all of the updates for this table are stored; these updates have to be pre-calculated because they take a couple of days to compute. That table is in the format [ product_id int(10), update_value int(10) ]. The strategy I'm taking to issue these 17 million updates quickly is to load all of them into memory in a Ruby script and group them in a hash of arrays, so that each update_value is a key and each array is a list of sorted product_ids:

        { 150 => [1,2,3,4,5,6],
          160 => [7,8,9,10] }

    Updates are then issued in the form:

        UPDATE product SET update_value = 150 WHERE product_id IN (1,2,3,4,5,6);
        UPDATE product SET update_value = 160 WHERE product_id IN (7,8,9,10);

    I'm pretty sure I'm doing this correctly, in the sense that issuing the updates on sorted batches of product_ids should be the optimal way to do it with MySQL/InnoDB. I'm hitting a weird issue, though: when I was testing with ~13 million records, the updates only took around 45 minutes. Now I'm testing with more data, ~17 million records, and the updates are taking closer to 120 minutes. I would have expected some sort of speed decrease here, but not to the degree I'm seeing. Any advice on how I can speed this up, or on what could be slowing me down with this larger record set? As far as server specs go, they're pretty good: heaps of memory and CPU, and the whole DB should fit into memory with plenty of room to grow.
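
    The grouping-and-batching idea itself is easy to pin down in code. A Python sketch of it (table and column names as in the question, chunk size illustrative), which also caps the IN-list length so each statement stays a predictable size:

        from collections import defaultdict

        def batched_updates(pairs, chunk=1000):
            """pairs: iterable of (product_id, update_value) rows.
            Yields UPDATE statements with sorted, bounded IN lists."""
            groups = defaultdict(list)
            for product_id, value in pairs:
                groups[value].append(product_id)
            for value, ids in sorted(groups.items()):
                ids.sort()                       # sorted ids help InnoDB locality
                for i in range(0, len(ids), chunk):
                    id_list = ",".join(str(x) for x in ids[i:i + chunk])
                    yield ("UPDATE product SET update_value = %d "
                           "WHERE product_id IN (%s);" % (value, id_list))

        for stmt in batched_updates([(1, 150), (2, 150), (7, 160)]):
            print(stmt)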

    Read the article

  • How can I bend an object in OpenGL?

    - by mindnoise
    Is there a way to bend an object, like a cylinder or a plane, using OpenGL? I'm an OpenGL beginner (I'm using OpenGL ES 2.0, if that matters, although I suspect the math matters most in this case, so it's somewhat version-independent). I understand the basics: translate, rotate, matrix transformations, etc. I was wondering whether there is a technique that allows you to actually change the geometry of your objects (in this case by bending them)? Any links, tutorials or other references are welcome!
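
    Bending is usually done by displacing the vertices yourself (on the CPU or in a vertex shader) rather than with a single matrix, because a bend is a non-linear transform. Here is a minimal numpy sketch of one common bend deformer, wrapping geometry that extends along x around a circle of radius R (the radius and the axis choice are illustrative):

        import numpy as np

        def bend(vertices, R):
            """Bend geometry lying along the x axis around a circle of radius R.
            x becomes arc length along the circle; larger R gives a gentler bend."""
            v = np.asarray(vertices, dtype=float)
            theta = v[:, 0] / R                   # arc angle for each vertex
            out = np.empty_like(v)
            out[:, 0] = (R - v[:, 1]) * np.sin(theta)
            out[:, 1] = R - (R - v[:, 1]) * np.cos(theta)
            out[:, 2] = v[:, 2]                   # the bend leaves z untouched
            return out

        # A flat strip of points along x bends into an arc:
        strip = [[x, 0.0, 0.0] for x in np.linspace(0.0, 3.0, 7)]
        print(bend(strip, R=2.0).round(3))

    The same formula dropped into a vertex shader gives the GPU version.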

    Read the article

  • A small, intra-app Object to String Serializer

    - by Rick Strahl
    On a few occasions I've needed a very compact serializer for small, simple, flat objects, typically for storage in cookies or a FormsAuthentication ticket in ASP.NET. XML and JSON serialization are too verbose for those scenarios, so a simple property serializer that strings the values together was needed. Originally I did this by hand, but here is a class that automates the process.
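
    The idea translates to a few lines in any language. A minimal Python sketch of such a flat property serializer (the separator and escaping scheme are illustrative, not the class the post describes):

        SEP = "|"
        FIELDS = ("user", "role", "expires")    # illustrative property list

        def serialize(obj):
            """Join the listed properties into one compact delimited string."""
            parts = [str(getattr(obj, f)).replace("%", "%25").replace(SEP, "%7C")
                     for f in FIELDS]
            return SEP.join(parts)

        def deserialize(text):
            """Recover the property values in declaration order."""
            return [p.replace("%7C", SEP).replace("%25", "%")
                    for p in text.split(SEP)]

        class Ticket:
            user, role, expires = "alice", "admin", "2024-01-01"

        print(serialize(Ticket()))               # alice|admin|2024-01-01
        print(deserialize("alice|admin|2024-01-01"))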

    Read the article

  • NoSQL is not about object databases

    - by Bertrand Le Roy
    NoSQL as a movement is an interesting beast. I kinda like that it's negatively defined (I happen to belong myself to at least one other such a-community). It's not, at its roots, about proposing one specific new silver bullet to kill an old problem; it's about challenging the consensus. Actually, blindly and systematically replacing relational databases with object databases would just replace one set of issues with another. No, the point is to recognize that relational databases are not a universal answer (although they have been used as one for so long), and to recognize instead that there's a whole spectrum of data storage solutions out there. Why is it so hard to recognize, by the way? You are already using some of those other data storage solutions every day. Let me cite a few:

        - The file system
        - Active Directory
        - XML / JSON documents
        - The Web
        - E-mail
        - Logs
        - Excel files
        - EXIF blobs in your photos
        - Relational databases
        - And yes, object databases

    It's just a fact of modern life. Notice, by the way, that most of the data you use every day is unstructured and thus mostly unsuitable for relational storage. It really is more a matter of recognizing it: you are already doing NoSQL. So what happens when for any reason you need to simultaneously query two or more of these heterogeneous data stores? Well, you build an index of sorts combining them, and that's what you query instead. Of course, there's not much distance to travel from that to realizing that querying is better done when completely separated from storage. So why am I writing about this today? Well, it's something I've been giving a lot of thought, on and off, over the last ten years. When I built my first CMS all that time ago, one of the main problems my customers were facing was to manage and make sense of the mountain of unstructured data that constituted most of their business. The central entity of that system was the file system, because we were dealing with lots of Word documents, PDFs, OCR'd articles, photos and static web pages. We could have stored all that in SQL Server. It would have worked. Ew. I'm so glad we didn't. Today, I'm working on Orchard (another CMS ;). It's a pretty young project, but already one of the questions we get the most is how to integrate existing data. One of the ideas I'll be trying hard to sell to the rest of the team in the next few months is to completely split querying from storage. Not only does this provide great opportunities for performance optimization, it gives you homogeneous access to heterogeneous and existing data sources. For free.

    Read the article

  • unstripped binary or object creating debian package

    - by Jaime
    I'm new to package creation and I'm trying to create a .deb package, but I keep getting 'unstripped-binary-or-object' on all my libraries and executables. I have everything set up in the directory tree where I want it to end up (plus a DEBIAN folder with the control file) and then do:

        fakeroot dpkg-deb --build ./mypackage

    When I lint with lintian mypackage.deb I get that error. Any ideas or suggestions? Thanks

    Read the article

  • Rotate/Translate object in local space

    - by Mathias Hölzl
    I am trying to create a movement-controller class for game entities. This class should transform an entity based on mouse and keyboard input. I am able to calculate the changed rotation and the new global position. Then I multiply:

        newGlobalMatrix = changedRotationMatrix * oldGlobalMatrix;
        newGlobalMatrix = MatrixSetPosition(newPosition);

    The problem is that the object rotates around the global axes, not around its local axes. I use XNAMath for the matrix calculations.
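
    The usual culprit here is multiplication order. With column vectors (v' = M * v), premultiplying by the rotation applies it about the world axes, while postmultiplying applies it about the object's own axes; row-vector conventions such as XNAMath's swap the two sides. A small numpy sketch of the distinction (column-vector convention):

        import numpy as np

        def rot_x(deg):
            r = np.radians(deg); c, s = np.cos(r), np.sin(r)
            return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

        def rot_z(deg):
            r = np.radians(deg); c, s = np.cos(r), np.sin(r)
            return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

        M = rot_z(90)                    # object already yawed 90 degrees
        R = rot_x(30)                    # pitch we now want to apply

        world_pitch = R @ M              # pitches about the WORLD x axis
        local_pitch = M @ R              # pitches about the object's OWN x axis

        fwd = np.array([1.0, 0.0, 0.0])  # object-space forward
        print((world_pitch @ fwd).round(3))   # [0.    0.866 0.5  ]
        print((local_pitch @ fwd).round(3))   # [0.    1.    0.   ]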

    Read the article

  • ANN: ObjectDataSource and siaqodb -object database for .NET

    Siaqodb, an object database for .NET, Mono and Silverlight, version 1.1 (just released) now fully supports ASP.NET 4.0; with ObjectDataSource and ASP.NET Dynamic Data, building data-driven apps is faster than ever: http://siaqodb.com/?p=236

    Read the article

  • How to bind ArrayList Object to an ASPxGridView control

    - by nikolaosk
    I have been involved with an ASP.NET project recently and implemented it using the awesome DevExpress ASP.NET controls. Parts of the project involved binding data from custom objects, SqlDataSource, and ObjectDataSource data sources to the ASPxGridView control. In this post I will show you how to bind data from an ArrayList object to the ASPxGridView control. If you want to implement this example, you need to download the trial version of these controls unless you are a licensed holder of...(read more)

    Read the article

  • Software design for non object oriented paradigm

    - by Dean
    I'm currently working on a project where I'm writing the firmware for an electronic system in C, and I have been asked to produce documentation on the development/evolution of the software for the embedded devices. Having developed software in the object-oriented paradigm, I know to document such software with UML, e.g. class diagrams with objects; however, this does not work for documenting the development of my embedded system. So what should I produce to document the development of my firmware?

    Read the article

  • Too much delay while sending object over UDP to server

    - by RomZes
    I'm getting a 4-second delay when sending objects over UDP. I'm working on a small game and trying to implement multiplayer; for now I'm just trying to synchronize the movements of two balls on the screen. StartingPoint.java is my server (first player), which receives serialized objects (coordinates). SecondPlayer.java is the client, which sends serialized objects to the server. When I move my first object, it appears 4 seconds later on the other screen.

    StartingPoint.java:

        @Override
        public void run() {
            byte[] receiveData = new byte[256];
            byte[] sendData = new byte[256];
            // DatagramSocket socketS;
            try {
                socket = new DatagramSocket(5000);
                System.out.println("Socket created on " + port + " port");
            } catch (SocketException e1) {
                e1.printStackTrace();
            }
            while (true) {
                b1.update(this);
                b3.update();
                System.out.println("Starting server...");
                // Receiving and deserializing the object
                try {
                    // socket.setSoTimeout(1000);
                    DatagramPacket packet = new DatagramPacket(buf, buf.length);
                    socket.receive(packet);
                    byte[] data = packet.getData();
                    ByteArrayInputStream in = new ByteArrayInputStream(data);
                    ObjectInputStream is = new ObjectInputStream(in);
                    // socket.setSoTimeout(300);
                    b1 = (Ball) is.readObject();
                } catch (IOException | ClassNotFoundException e) {
                    e.printStackTrace();
                }
                repaint();
                try {
                    Thread.sleep(17);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        }

    SecondPlayer.java:

        @Override
        public void run() {
            while (true) {
                b.update();
                networkSend();
                repaint();
                try {
                    Thread.sleep(17);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        }

        public void networkSend() {
            // Serialize to a byte array
            try {
                ByteArrayOutputStream bStream = new ByteArrayOutputStream();
                ObjectOutputStream oo = new ObjectOutputStream(bStream);
                oo.writeObject(b);
                oo.flush();
                oo.close();
                byte[] bufCar = bStream.toByteArray();
                // socket = new DatagramSocket();
                // socket.setSoTimeout(1000);
                InetAddress address = InetAddress.getByName("localhost");
                DatagramPacket packet = new DatagramPacket(bufCar, bufCar.length, address, port);
                socket.send(packet);
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
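
    For comparison of wire formats, sending just the coordinates as packed primitives keeps each datagram tiny and skips the object-stream overhead entirely. A Python sketch of such a protocol (the field layout, host and port are illustrative); in Java the equivalent would use DataOutputStream or ByteBuffer rather than ObjectOutputStream:

        import socket, struct

        FMT = "!Bff"                  # player id (byte) + x, y floats, network order
        ADDR = ("127.0.0.1", 5000)    # illustrative host and port

        def send_state(sock, player, x, y):
            sock.sendto(struct.pack(FMT, player, x, y), ADDR)   # 9-byte datagram

        def recv_state(sock):
            data, _ = sock.recvfrom(64)
            return struct.unpack(FMT, data[:struct.calcsize(FMT)])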

    Read the article

  • 'module' object has no attribute 'element_make_factory'

    - by Ronan Dejhero
    I have this code:

        import pygst
        import gst, pygtk

        player_name = gst.element_make_factory("playbin", "Multimedia Player")
        player_name.set_property("uri", "../media/alert.mp3")
        player_name.set_state(gst.PLAYING)

    It keeps throwing the following error:

        player_name = gst.element_make_factory("playbin", "Multimedia Player")
        AttributeError: 'module' object has no attribute 'element_make_factory'

    Is there any way to solve this, and why is it happening? If I print gst I get the following:

        <module 'gst' from '/usr/lib/python2.7/dist-packages/gst-0.10/gst/__init__.pyc'>

    So it is a module!

    Read the article

  • rotating an object on an arc

    - by gardian06
    I am trying to get a turret to rotate on an arc, and I have hit a wall. I have 8 possible starting orientations for the turrets, and I want them to rotate on a 90-degree arc. I currently take the starting rotation of the turret and derive from it the positive and negative boundaries of the arc. Because of engine restrictions (Unity) I have to do all of my tests against a value in [0, 360], and due to numerical precision issues I cannot test against specific values. I would like to write a general test without having to go in and jury-rig cases. My current test is:

        // member variables
        public float negBound;
        public float posBound;

        // set in Start() (called immediately after construction);
        // eulerAngles.y is the degree measure of the starting y rotation
        negBound = transform.eulerAngles.y - 45;
        posBound = transform.eulerAngles.y + 45;
        // ensure that values are within bounds
        if (negBound < 0) {
            negBound += 360;
        } else if (posBound > 360) {
            posBound -= 360;
        }

        // called from Update() when the target is not in the firing line
        void Rotate() {
            // controls what direction
            if (transform.eulerAngles.y > posBound) {
                dir = -1;
            } else if (transform.eulerAngles.y < negBound) {
                dir = 1;
            }
            // rotate object
        }

    Here is a table of values for my different cases: read 'base' as the starting rotation of the turret, 'neg' as the negative boundary, 'pos' as the positive boundary, 'range' as the acceptable range of values, and 'works' as whether it performs as expected with the current code.

        base | neg | pos | range            | works
        -----|-----|-----|------------------|------
          0  | 315 |  45 | 315-0, 0-45      |
         45  |   0 |  90 | 0-45, 45-90      |  x
        135  |  90 | 180 | 90-135, 135-180  |  x
        180  | 135 | 225 | 135-180, 180-225 |  x
        225  | 180 | 270 | 180-225, 225-270 |  x
        270  | 225 | 315 | 225-270, 270-315 |
        315  | 270 |   0 | 270-315, 315-0   |

    I will need to do all tests from derived or stored values, but I cannot figure out how to get all of my cases to work simultaneously. I attempted to concatenate the two tests:

        if ((transform.eulerAngles.y > posBound) && (transform.eulerAngles.y < negBound)) {
            dir *= -1;
        }

    but this caused only the first case to be successful. I also attempted to store an opposite value:

        void Rotate() {
            // controls what direction
            if ((transform.eulerAngles.y > posBound) && (transform.eulerAngles.y < oposite)) {
                dir = -1;
            } else if ((transform.eulerAngles.y < negBound) && (transform.eulerAngles.y > oposite)) {
                dir = 1;
            }
            // rotate object
        }

    but this causes the opposite situation to the one indicated in the table. What am I missing here?
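
    A wrap-safe way to phrase the test, instead of comparing against both bounds, is to measure the signed difference between the current heading and the arc's centre, normalized into [-180, 180). A Python sketch of the idea (the names 'heading' and 'center' are illustrative; the same arithmetic ports directly to C#):

        def signed_delta(heading, center):
            """Signed angular difference in degrees, normalized to [-180, 180)."""
            return (heading - center + 180.0) % 360.0 - 180.0

        def direction(heading, center, half_arc=45.0):
            """-1/+1 to steer back toward the arc, 0 when inside it."""
            d = signed_delta(heading, center)
            if d > half_arc:
                return -1
            if d < -half_arc:
                return 1
            return 0

        # The base-45 turret from the table (arc 0..90):
        print(direction(100.0, 45.0))  # -1: past the positive bound
        print(direction(350.0, 45.0))  #  1: past the negative bound, wrapping through 0
        print(direction(30.0, 45.0))   #  0: inside the arc

    Because the delta is computed modulo 360, every row of the table reduces to the same single comparison.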

    Read the article
