Search Results

Search found 15007 results on 601 pages for 'array'.

Page 494/601 | < Previous Page | 490 491 492 493 494 495 496 497 498 499 500 501  | Next Page >

  • How to implement a multi-part snake with smooth movement? [closed]

    - by Jamie
    Sorry that I couldn't answer on my previous post, but it got closed and I had to prepare for my finals. Since there were problems understanding what I'm trying to achieve, I'm going to describe it in more depth. I'm creating a game in which you steer a snake; I assume everybody knows how that works. But in my case the snake isn't just propagating through an array element by element. Imagine a 2D grid on which the snake moves: 10x10 tiles, where one tile is 4x4 meters. The snake's head spawns in the middle of tile (3,2) (counting from (0,0)), so its position is (4*3+2, 4*2+2) (the 2's put the head in the middle of the 4x4 tile). And here's where the fun begins: when the snake moves, it doesn't jump to the next tile. Instead it moves a fraction of the way there. Let's say the snake is heading to tile (4,2). After it moves once, its position is (4*3+2+0.1, 4*2+2), where 0.1 is the fraction of the way it moved. This is done to achieve smooth movement. Now I'm adding the rest of the body, which is supposed to move along the exact same path as the head did. I implemented it so that each part of the body has its own position and direction, and then applied this algorithm: 1. Move each part in its direction. 2. If a part is in the middle of a tile (which implies all of them are), change each part's direction to the direction of the part preceding it. As I said before, I can make this work, but I can't stop thinking that I'm overlooking a much easier and cleaner solution. So this is my question: is there any easier/better/faster way to do this?
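
    One cleaner pattern, sketched below in C# with hypothetical names: instead of giving every part its own direction, let the head log its position each update and place each body part a fixed number of history entries behind it. The body then retraces the head's exact path with no per-part state to keep in sync.

        using System;
        using System.Collections.Generic;

        // Vec2 is a stand-in for whatever vector type the engine provides.
        struct Vec2
        {
            public float X, Y;
            public Vec2(float x, float y) { X = x; Y = y; }
        }

        class Snake
        {
            private readonly List<Vec2> path = new List<Vec2>(); // head positions, newest last
            private readonly int spacing;  // history entries between consecutive parts
            private readonly int parts;
            public Vec2 Head;

            public Snake(Vec2 start, int parts, int spacing)
            {
                Head = start;
                this.parts = parts;
                this.spacing = spacing;
            }

            // Call once per tick with the head's fractional step (e.g. 0.1 tiles).
            public void Update(Vec2 step)
            {
                Head = new Vec2(Head.X + step.X, Head.Y + step.Y);
                path.Add(Head);

                // Keep only as much history as the body needs.
                int needed = parts * spacing + 1;
                if (path.Count > needed)
                    path.RemoveRange(0, path.Count - needed);
            }

            // Part i sits i*spacing history entries behind the head.
            public Vec2 GetPartPosition(int i)
            {
                int index = path.Count - 1 - i * spacing;
                return path[Math.Max(index, 0)];
            }
        }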

    Read the article

  • "Whole-team" C++ features?

    - by Blaisorblade
    In C++, features like exceptions impact your whole program: you can either disable them in your whole program, or you need to deal with them throughout your code. As a famous article in C++ Report puts it: "Counter-intuitively, the hard part of coding exceptions is not the explicit throws and catches. The really hard part of using exceptions is to write all the intervening code in such a way that an arbitrary exception can propagate from its throw site to its handler, arriving safely and without damaging other parts of the program along the way." Since even new throws exceptions, every function needs to provide basic exception safety (unless it only calls functions which guarantee not to throw), except when you disable exceptions altogether in your whole project. Hence, exceptions are a "whole-program" or "whole-team" feature, since they must be understood by everybody on a team using them. But not all C++ features are like that, as far as I know. A possible example: if I don't get templates but also do not use them, I will still be able to write correct C++, or will I not? I can even call sort on an array of integers and enjoy its amazing speed advantage over C's qsort (because no function pointer is called) without risking bugs, or can't I? It seems templates are not "whole-team". Are there other C++ features which impact code not directly using them, and are hence "whole-team"? I am especially interested in features not present in C.

    Update: I'm especially looking for features where there's no language-enforced sign you need to be aware of them. The first answer I got mentioned const-correctness, which is also whole-team, hence everybody needs to learn about it; however, AFAICS it will impact you only if you call a function which is marked const, and the compiler will prevent you from calling it on non-const objects, so you get something to google for. With exceptions, you don't even get that; moreover, they're in play as soon as you use new, hence exceptions are more "insidious". Since I can't phrase this objectively, though, I will appreciate any whole-team feature.

    Appendix: why this question is objective (if you wonder). C++ is a complex language, so many projects or coding guides try to select "simple" C++ features, and many people try to include or exclude some according to mostly subjective criteria. Questions about that get rightfully closed regularly here on SO. Above, instead, I defined (as precisely as possible) what a "whole-team" language feature is, provided an example (exceptions) together with extensive supporting evidence in the literature about C++, and asked for whole-team features in C++ beyond exceptions. Whether you should use "whole-team" features, or whether that's a relevant concept, might be subjective; but that only means the importance of this question is subjective, like always.
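
    To make the quoted claim concrete, here is a minimal C++ sketch of "intervening code": neither function throws or catches anything, yet only one of them survives an exception passing through (process() is a hypothetical callee that may throw).

        #include <memory>

        void process(int* data);  // assume this may throw

        // Leaks if process() throws: stack unwinding skips the delete[].
        void unsafe()
        {
            int* buffer = new int[1024];
            process(buffer);
            delete[] buffer;
        }

        // Basic exception safety via RAII: the buffer is freed whether
        // process() returns normally or throws.
        void safe()
        {
            std::unique_ptr<int[]> buffer(new int[1024]);
            process(buffer.get());
        }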

    Read the article

  • Help with DB Structure, vOD site

    - by Chud37
    I have a video-on-demand style site that hosts series of videos under different modules. However, with the way I have designed the database, it is proving to be very slow. I have asked this question before and someone suggested indexing, but I cannot seem to get my head around it. I would like someone to help with the structure of the database here to see if it can be improved. The core table is Videos:

        ID       bigint(20)   (primary key, auto-increment)
        pID      text
        airdate  text
        title    text
        subject  mediumtext
        url      mediumtext
        mID      int(11)
        vID      int(11)
        sID      int(11)

    pID is a unique 5-digit string that is a shorthand identifier for each video. airdate is the timestamp (stored in text format; right there, maybe I should change that to an auto-updating TIMESTAMP). title is self-explanatory, subject is self-explanatory, url is the hard link on the site to the video, mID joins to another table for the module title, vID joins to another table for the language of the video (English, Russian, etc.), and sID is the summary for the module, a paragraph stored in an external database. The slowest part of the website is the logging. I store the data in another table called Hits:

        id      mediumint(10)  (primary key, auto-increment)
        progID  text
        ts      int(10)

    Again (this was all made a while ago), my timestamp (ts) is an INT instead of ON UPDATE CURRENT_TIMESTAMP, which I guess it should be. However, this table is now 47,492 rows long, and the script that I wrote to process it is very, very slow; so slow, in fact, that it times out. A row is added to this table each time a user clicks Play on the website, so progID is the same as the pID, and it logs the PHP time() timestamp in ts. Basically, I load the entire Hits table into an array and count the hits in each day using the ts column. I am guessing (I'm quite slow at all this, but I had no idea this would happen when I built the thing) that this is possibly the worst way to go about it. So my questions are as follows: Is there a better way of structuring the Videos table? If so, what do you suggest? Is there a better way of structuring Hits? If so, please help/tell me! Or are my tables fine and the PHP coding crappy?
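
    On the Hits problem specifically, a sketch of the usual fix (assuming MySQL and the columns described above): index the timestamp and let the database do the per-day counting, instead of loading every row into a PHP array.

        -- one-time: index the column the aggregation filters/groups on
        ALTER TABLE Hits ADD INDEX idx_hits_ts (ts);

        -- per-day counts computed server-side; returns roughly one row per
        -- day instead of shipping 47,000+ rows to PHP
        SELECT DATE(FROM_UNIXTIME(ts)) AS day, COUNT(*) AS hits
        FROM Hits
        GROUP BY day
        ORDER BY day;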

    Read the article

  • "Collection Wrapper" pattern - is this common?

    - by Prog
    A different question of mine had to do with encapsulating member data structures inside classes. In order to understand this question better, please read that question and look at the approach discussed. One of the guys who answered that question said that the approach is good, but (if I understood him correctly) there should be a class existing just for the purpose of wrapping the collection, instead of an ordinary class offering a number of public methods just to access the member collection. For example, instead of this:

        class SomeClass {
            // downright exposing the concrete collection
            Thing[] someCollection;
            // other stuff omitted
            Thing[] getCollection() { return someCollection; }
        }

    Or this:

        class SomeClass {
            // encapsulating the collection, but inflating the class' public interface
            Thing[] someCollection;
            // class functionality omitted
            public Thing getThing(int index) { return someCollection[index]; }
            public int getSize() { return someCollection.length; }
            public void setThing(int index, Thing thing) { someCollection[index] = thing; }
            public void removeThing(int index) { someCollection[index] = null; }
        }

    We'll have this:

        // encapsulating the collection in a different class, dedicated to this
        class SomeClass {
            CollectionWrapper someCollection;
            CollectionWrapper getCollection() { return someCollection; }
        }

        class CollectionWrapper {
            Thing[] someCollection;
            public Thing getThing(int index) { return someCollection[index]; }
            public int getSize() { return someCollection.length; }
            public void setThing(int index, Thing thing) { someCollection[index] = thing; }
            public void removeThing(int index) { someCollection[index] = null; }
        }

    This way, the inner data structure in SomeClass can change without affecting client code, and without forcing SomeClass to offer a lot of public methods just to access the inner collection; CollectionWrapper does this instead. E.g. if the collection changes from an array to a List, the internal implementation of CollectionWrapper changes, but client code stays the same. Also, the CollectionWrapper can hide certain things from the client code; for example, it can disallow mutation of the collection by not having the methods setThing and removeThing. This approach to decoupling client code from the concrete data structure seems, IMHO, pretty good. Is this approach common? What are its downfalls? Is it used in practice?
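
    For comparison, Java's standard library already ships the read-only flavor of this wrapper, so for that case no hand-written class is needed; a minimal sketch:

        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.List;

        class Thing { }

        class SomeClass {
            private final List<Thing> things = new ArrayList<>();

            // Callers get the full List read API, but any mutator called on
            // the returned view throws UnsupportedOperationException.
            public List<Thing> getThings() {
                return Collections.unmodifiableList(things);
            }
        }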

    Read the article

  • Zelda-style top-down RPG. How to store tile and collision data?

    - by Delerat
    I'm looking to build a Zelda: LTTP style top-down RPG. I've read a lot on the subject and am currently going back and forth between a few solutions. I'm using C#, MonoGame, and Tiled. For my tile maps, these are the choices I can see in front of me:

    1. Store each tile as its own array, each one having 3-4 layers, texture/animation, depth, flags, and maybe collision (depending on how I do it). I've read warnings about memory issues going this route, and my biggest map will probably be 160x120 tiles. My average map, however, will be about 40x30. The number of tiles might cut in half if I decide to double my tile size, which is currently 16x16. This is the most appealing approach for me, as I feel like I would know how to save maps, make changes, and separate a map into chunks for collision checks.

    2. Store the static parts of my tile map in multiple arrays acting as the different layers, and then use entities for anything that isn't static. All of the other tile data, such as collisions, depth, etc., would be stored in their own layers as well, I guess? This way just seems messy to me, though.

    Regardless of which one I choose, I'm also unsure how to plan all of the other tile data. I could write a bunch of code that would know which integer represents which tile and its data, but if I changed a tileset in Tiled and exported it again, all of those integers could potentially change and I'd have to adjust a whole bunch of code. My other issue is how to do collision. I want to at least support angled collision that slides you around the corners of objects like LTTP does, if not more oddball shapes as well. So do I:

    1. Store collision as a flag for binary collision? Could I get this to support angles? Would it be fine to store collision as an integer and have each number represent a certain angle of collision?

    2. Store a list of rectangles or other shapes and do collision that way?

    Sorry for the large two-part (three-part?) question. I felt these needed to be asked together, as I believe each choice influences the others.
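
    One common shape for this in C#/MonoGame-style code, purely a sketch with hypothetical names: a small per-tile struct per layer, with collision encoded as flags (plain solid plus a few angled piece types). At these map sizes, memory is not a real constraint.

        using System;

        [Flags]
        enum TileFlags : byte
        {
            None    = 0,
            Solid   = 1 << 0,
            SlopeNE = 1 << 1,  // 45-degree corner pieces for LTTP-style sliding
            SlopeNW = 1 << 2,
            SlopeSE = 1 << 3,
            SlopeSW = 1 << 4,
        }

        // Roughly 4 bytes per tile: a 160x120 map with 4 layers is ~300 KB.
        struct Tile
        {
            public ushort TextureId;  // index into the tileset
            public byte Depth;
            public TileFlags Flags;
        }

        class TileMap
        {
            public readonly Tile[][,] Layers;

            public TileMap(int layerCount, int width, int height)
            {
                Layers = new Tile[layerCount][,];
                for (int i = 0; i < layerCount; i++)
                    Layers[i] = new Tile[width, height];
            }
        }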

    Read the article

  • Generate GUID from any string using C#

    - by Haitham Khedre
    Sometimes you need to generate a GUID from a string which is not valid for the GUID constructor, so what we will do is derive from the string an input that the GUID constructor will accept. It is recommended to make sure that the string you generate a GUID from is somehow unique. The idea is simple: convert the string to a 16-byte array, which the GUID constructor will accept. The code will talk:

        using System;
        using System.Text;

        namespace StringToGUID
        {
            class Program
            {
                static void Main(string[] args)
                {
                    int tokenLength = 32;
                    int guidByteSize = 16;

                    // take the last 32 characters of the token and encode the first
                    // 16 of them to UTF-8 (16 bytes for ASCII input), which Guid accepts
                    string token = "BSNAItOawkSl07t77RKnMjYwYyG4bCt0g8DVDBv5m0";
                    byte[] b = new UTF8Encoding().GetBytes(token.Substring(token.Length - tokenLength, tokenLength).ToCharArray(), 0, guidByteSize);
                    Guid g = new Guid(b);
                    Console.WriteLine(g.ToString());

                    token = "BSNePf57YwhzeE9QfOyepPfIPao4UD5UohG_fI-#eda7d";
                    b = new UTF8Encoding().GetBytes(token.Substring(token.Length - tokenLength, tokenLength).ToCharArray(), 0, guidByteSize);
                    g = new Guid(b);
                    Console.WriteLine(g.ToString());

                    Console.Read();
                }
            }
        }

    And the output:

        37306c53-3774-5237-4b6e-4d6a59775979
        66513945-794f-7065-5066-4950616f3455
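
    Note the approach above requires tokens of at least 32 characters and ignores all but 16 of them. A hashed variant of the same idea (a sketch, not the original article's code) accepts strings of any length while staying deterministic: MD5 emits exactly the 16 bytes the Guid constructor wants, and equal inputs always map to equal GUIDs.

        using System;
        using System.Security.Cryptography;
        using System.Text;

        class GuidFromAnyString
        {
            static Guid ToGuid(string input)
            {
                using (MD5 md5 = MD5.Create())
                {
                    byte[] hash = md5.ComputeHash(Encoding.UTF8.GetBytes(input));
                    return new Guid(hash);  // 16 bytes, always
                }
            }

            static void Main()
            {
                Console.WriteLine(ToGuid("BSNAItOawkSl07t77RKnMjYwYyG4bCt0g8DVDBv5m0"));
                Console.WriteLine(ToGuid("short string"));  // works even below 32 chars
            }
        }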

    Read the article

  • Is this a valid implementation of the repository pattern?

    - by user1578653
    I've been reading up on the repository pattern, with a view to implementing it in my own application. Almost all the examples I've found on the internet use some kind of existing framework rather than showing how to implement it from scratch. Here are my first thoughts on how I might implement it; I was wondering if anyone could advise me on whether this is correct. I have two tables, named CONTAINERS and BITS. Each CONTAINER can contain any number of BITs. I represent them as two classes:

        class Container {
            private $bits;
            private $id;
            //...and a property for each column in the table...

            public function __construct() {
                $this->bits = array();
            }

            public function addBit($bit) {
                $this->bits[] = $bit;
            }

            //...getters and setters...
        }

        class Bit {
            //some properties, methods etc...
        }

    Each class will have a property for each column in its respective table. I then have a couple of 'repositories' which handle saving/retrieving these objects from the database:

        //repository to control saving/retrieving Containers from the database
        class ContainerRepository {
            //inject the bit repository for use later
            public function __construct($bitRepo) {
                $this->bitRepo = $bitRepo;
            }

            public function getById($id) {
                //talk directly to Oracle here to load all column data into the object

                //get all the bits in the container
                $bits = $this->bitRepo->getByContainerId($id);
                foreach ($bits as $bit) {
                    $container->addBit($bit);
                }

                //return an instance of Container
            }

            public function persist($container) {
                //talk directly to Oracle here to save it to the database
                //if its ID is NULL, create a new container in the database,
                //otherwise update the existing one

                //use BitRepository to save each of the Bits inside the Container
                $bitRepo = $this->bitRepo;
                foreach ($container->bits as $bit) {
                    $bitRepo->persist($bit);
                }
            }
        }

        //repository to control saving/retrieving Bits from the database
        class BitRepository {
            public function getById($id) {}
            public function getByContainerId($containerId) {}
            public function persist($bit) {}
        }

    Therefore, the code I would use to get an instance of Container from the database would be:

        $bitRepo = new BitRepository();
        $containerRepo = new ContainerRepository($bitRepo);
        $container = $containerRepo->getById($id);

    Or to create a new one and save it to the database:

        $bitRepo = new BitRepository();
        $containerRepo = new ContainerRepository($bitRepo);

        $container = new Container();
        $container->setSomeProperty(1);

        $bit = new Bit();
        $container->addBit($bit);

        $containerRepo->persist($container);

    Can someone advise me as to whether I have implemented this pattern correctly? Thanks!

    Read the article

  • Java game applet development

    - by RomZes
    I'm getting a 4-second delay when sending objects over UDP. I'm working on a small game and trying to implement multiplayer. For now I'm just trying to synchronize the movements of 2 balls on the screen. StartingPoint.java is my server (first player), which receives serialized objects (coordinates). SecondPlayer.java is the client, which sends serialized objects to the server. When I move my first object, it appears 4 seconds later on the other screen.

    StartingPoint.java:

        @Override
        public void run() {
            byte[] receiveData = new byte[256];
            byte[] sendData = new byte[256];
            // DatagramSocket socketS;

            try {
                socket = new DatagramSocket(5000);
                System.out.println("Socket created on " + port + " port");
            } catch (SocketException e1) {
                e1.printStackTrace();
            }

            while (true) {
                b1.update(this);
                b3.update();
                System.out.println("Starting server...");

                // receiving and deserializing the object
                try {
                    //socket.setSoTimeout(1000);
                    DatagramPacket packet = new DatagramPacket(buf, buf.length);
                    socket.receive(packet);
                    byte[] data = packet.getData();
                    ByteArrayInputStream in = new ByteArrayInputStream(data);
                    ObjectInputStream is = new ObjectInputStream(in);
                    //socket.setSoTimeout(300);
                    b1 = (Ball) is.readObject();
                } catch (IOException | ClassNotFoundException e) {
                    e.printStackTrace();
                }

                repaint();

                try {
                    Thread.sleep(17);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        }

    SecondPlayer.java:

        @Override
        public void run() {
            while (true) {
                b.update();
                networkSend();
                repaint();

                try {
                    Thread.sleep(17);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        }

        public void networkSend() {
            // serialize to a byte array
            try {
                ByteArrayOutputStream bStream = new ByteArrayOutputStream();
                ObjectOutputStream oo = new ObjectOutputStream(bStream);
                oo.writeObject(b);
                oo.flush();
                oo.close();
                byte[] bufCar = bStream.toByteArray();

                //socket = new DatagramSocket();
                //socket.setSoTimeout(1000);
                InetAddress address = InetAddress.getByName("localhost");
                DatagramPacket packet = new DatagramPacket(bufCar, bufCar.length, address, port);
                socket.send(packet);
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
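
    It is hard to pin down the full four seconds from the code alone, but one structural issue stands out: the blocking socket.receive() runs inside the same loop that updates and repaints, so the game stalls whenever a packet is late. A common restructuring, sketched here against the post's own socket and Ball types (latestBall is a hypothetical volatile field the game loop reads), moves the receive onto its own thread:

        // dedicated receive thread; drop this into the server class
        Thread receiver = new Thread(new Runnable() {
            @Override
            public void run() {
                byte[] buf = new byte[256];
                while (true) {
                    try {
                        DatagramPacket packet = new DatagramPacket(buf, buf.length);
                        socket.receive(packet);  // blocks here, not in the game loop
                        ObjectInputStream is = new ObjectInputStream(
                                new ByteArrayInputStream(packet.getData(), 0, packet.getLength()));
                        latestBall = (Ball) is.readObject();
                    } catch (IOException | ClassNotFoundException e) {
                        e.printStackTrace();
                    }
                }
            }
        });
        receiver.setDaemon(true);
        receiver.start();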

    Read the article

  • Dell VRTX - slow cluster shared storage

    - by NorbyTheGeek
    I have a brand new Dell VRTX box set up as a Failover Cluster running HA Hyper-V virtual machines. This is my first time setting up clustering, and my first time with one of these boxes, so I'm sure I've missed something. The virtual machines are experiencing high disk latency and bad performance when accessing their VHD(x) files located on a Cluster Shared Volume. The VRTX has 10 x 900 GB 10K SAS drives in RAID 6 configuration, and the VRTX has the redundant Shared PERC 8 controllers. Both blades have full access to the virtual disks. There are two M520 blades installed, each with 128 GB RAM. MPIO is configured for the PERC 8 controllers. The operating system on the blades is Server 2012 (NOT R2). The RAID 6 array is split into a small (8 GB) volume for the cluster quorum witness and a large (6.5 TB) volume for a Cluster Shared Volume (mounted on the nodes as C:\ClusterStorage\Volume1). An example of slow disk access: logging into a Server 2012 VM and having Server Manager come up automatically. Disk access goes to 100%, with write speeds at 20 MB/s or so, read speeds of 500 KB/s or so, and an Average Response Time of over 1000 ms, sometimes spiking at 4000-5000 ms or so. It's the latency that really worries me. Is there something specific I should look at in my configuration? It doesn't seem to matter whether I use VHD or VHDX, dynamic or static.

    Read the article

  • How to disable the RAID in x3400 M2

    - by BanKtsu
    I just want to disable the default RAID in my IBM System x3400 M2 server (7837-24X). I have 3 SAS disk drives, and I want to make them a JBOD ("just a bunch of disks"), because I want to install CentOS on drive 0 and use the other two as cache-file drives for a Squid server. I disabled the RAID in the BIOS:

        System Settings / Adapters and UEFI drivers / LSI Logic Fusion MPT SAS Driver - PciRoot(0x0)/Pci(0x3,0x0)/Pci(0x0,0x0)
        LSI Logic MPT Setup Utility
        RAID Properties / Delete Array

    Later I booted the CentOS live CD and installed the OS on drive 0, with the other two mounted like this:

        LVM Volume Groups
          vg_proxyserver  139508
            lv_root         51200  /      ext4
            lv_home         84276  /home  ext4
            lv_swap          4032
        Hard Drives
          sdb (/dev/sdb)  free  140011
          sdc (/dev/sdc)  free  140011
          sdd (/dev/sdd)
            sdd1              500  /boot  ext4
            sdd2           139512  vg_proxyserver  physical volume (LVM)

    But when I restart the server it gives me the error:

        Boot failed Hard Disk 0
        UEFI PXE PciRoot(0x0)/Pci(0x1,0x0)/Pci(0x0,0x0)/MAC(001A64B15130,0x0))
        ........PXE-E18: Server response timeout.
        UEFI PXE PciRoot(0x0)/Pci(0x1,0x0)/Pci(0x0,0x0)/MAC(001A64B15132,0x0))
        ........PXE-E18: Server response timeout.

    and the OS does not start. Does IBM force me to use RAID? Why?

    Read the article

  • therubyracer on Ubuntu install issues with ruby-2.0.0-p247

    - by Victor S
    I can't seem to get the therubyracer gem installed on Ubuntu; I get the following errors. Can anyone help?

        ERROR:  Error installing therubyracer:
                ERROR: Failed to build gem native extension.

        /home/victorstan/.rvm/rubies/ruby-2.0.0-p247/bin/ruby extconf.rb
        checking for main() in -lpthread... yes
        creating Makefile
        make
        compiling constraints.cc
        compiling array.cc
        compiling value.cc
        compiling invocation.cc
        compiling primitive.cc
        compiling trycatch.cc
        compiling context.cc
        compiling exception.cc
        compiling template.cc
        compiling accessor.cc
        compiling object.cc
        compiling script.cc
        compiling external.cc
        compiling stack.cc
        compiling gc.cc
        compiling backref.cc
        compiling heap.cc
        compiling v8.cc
        compiling constants.cc
        compiling date.cc
        compiling function.cc
        compiling rr.cc
        compiling message.cc
        compiling init.cc
        compiling string.cc
        compiling handles.cc
        compiling signature.cc
        compiling locker.cc
        linking shared-object v8/init.so
        g++: /home/victorstan/.rvm/gems/ruby-2.0.0-p247@global/gems/libv8-3.11.8.17-x86_64-linux/vendor/v8/out/ia32.release/obj.target/tools/gyp/libv8_base.a: No such file or directory
        g++: /home/victorstan/.rvm/gems/ruby-2.0.0-p247@global/gems/libv8-3.11.8.17-x86_64-linux/vendor/v8/out/ia32.release/obj.target/tools/gyp/libv8_snapshot.a: No such file or directory
        make: *** [init.so] Error 1

    Update: it seems the build is trying to use a 32-bit library instead of a 64-bit one (note the ia32.release path inside the x86_64-linux gem). Has anyone else seen Ubuntu try to install the 32-bit version of an application or package even though the system is 64-bit Linux? Related: https://github.com/cowboyd/therubyracer/issues/262

    Read the article

  • bonding module parameters are not shown in /sys/module/bonding/parameters/

    - by c4f4t0r
    I have a server running SUSE 11 SP1 with kernel 2.6.32.54-0.3-default. With modinfo bonding I can see all of the module's parameters, but most of them do not show up under /sys/module/bonding/parameters/:

        modinfo bonding | grep ^parm
        parm:  max_bonds:Max number of bonded devices (int)
        parm:  num_grat_arp:Number of gratuitous ARP packets to send on failover event (int)
        parm:  num_unsol_na:Number of unsolicited IPv6 Neighbor Advertisements packets to send on failover event (int)
        parm:  miimon:Link check interval in milliseconds (int)
        parm:  updelay:Delay before considering link up, in milliseconds (int)
        parm:  downdelay:Delay before considering link down, in milliseconds (int)
        parm:  use_carrier:Use netif_carrier_ok (vs MII ioctls) in miimon; 0 for off, 1 for on (default) (int)
        parm:  mode:Mode of operation : 0 for balance-rr, 1 for active-backup, 2 for balance-xor, 3 for broadcast, 4 for 802.3ad, 5 for balance-tlb, 6 for balance-alb (charp)
        parm:  primary:Primary network device to use (charp)
        parm:  lacp_rate:LACPDU tx rate to request from 802.3ad partner (slow/fast) (charp)
        parm:  ad_select:803.ad aggregation selection logic: stable (0, default), bandwidth (1), count (2) (charp)
        parm:  xmit_hash_policy:XOR hashing method: 0 for layer 2 (default), 1 for layer 3+4 (charp)
        parm:  arp_interval:arp interval in milliseconds (int)
        parm:  arp_ip_target:arp targets in n.n.n.n form (array of charp)
        parm:  arp_validate:validate src/dst of ARP probes: none (default), active, backup or all (charp)
        parm:  fail_over_mac:For active-backup, do not set all slaves to the same MAC. none (default), active or follow (charp)

    In /sys/module/bonding/parameters there are only two entries:

        ls -l /sys/module/bonding/parameters/
        total 0
        -rw-r--r-- 1 root root 4096 2013-10-17 11:22 num_grat_arp
        -rw-r--r-- 1 root root 4096 2013-10-17 11:22 num_unsol_na

    I found some of these parameters under /sys/class/net/bond0/bonding/ instead, but when I try to change one I get the following error:

        echo layer2+3 > /sys/class/net/bond0/bonding/xmit_hash_policy
        -bash: echo: write error: Operation not permitted
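
    Two things worth noting on the write error, hedged since I cannot test this exact driver: the modinfo text above only documents layer2 and layer3+4 for xmit_hash_policy, so a 2.6.32-era driver may not accept layer2+3 at all; and the bonding driver generally refuses changes to this option while the bond is up (EPERM). A sketch of the usual sequence:

        # most bonding options can only be changed while the bond is down
        ifdown bond0
        echo layer2+3 > /sys/class/net/bond0/bonding/xmit_hash_policy
        ifup bond0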

    Read the article

  • Centos CMake Does Not Install Using gcc 4.7.2

    - by Devin Dixon
    A similar problem has been reported here with no solution: https://www.centos.org/modules/newbb/print.php?form=1&topic_id=42696&forum=56&order=ASC&start=0

    I've added an upgraded gcc to CentOS:

        cd /etc/yum.repos.d
        wget http://people.centos.org/tru/devtools-1.1/devtools-1.1.repo
        yum --enablerepo=testing-1.1-devtools-6 install devtoolset-1.1-gcc devtoolset-1.1-gcc-c++
        scl enable devtoolset-1.1 bash

    The result for my gcc:

        [root@hhvm-build-centos cmake-2.8.11.1]# gcc -v
        Using built-in specs.
        COLLECT_GCC=gcc
        COLLECT_LTO_WRAPPER=/opt/centos/devtoolset-1.1/root/usr/libexec/gcc/x86_64-redhat-linux/4.7.2/lto-wrapper
        Target: x86_64-redhat-linux
        Configured with: ../configure --prefix=/opt/centos/devtoolset-1.1/root/usr --mandir=/opt/centos/devtoolset-1.1/root/usr/share/man --infodir=/opt/centos/devtoolset-1.1/root/usr/share/info --with-bugurl=http://bugzilla.redhat.com/bugzilla --enable-bootstrap --enable-shared --enable-threads=posix --enable-checking=release --disable-build-with-cxx --disable-build-poststage1-with-cxx --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-gnu-unique-object --enable-linker-build-id --enable-languages=c,c++,fortran,lto --enable-plugin --with-linker-hash-style=gnu --enable-initfini-array --disable-libgcj --with-ppl --with-cloog --with-mpc=/home/centos/rpm/BUILD/gcc-4.7.2-20121015/obj-x86_64-redhat-linux/mpc-install --with-tune=generic --with-arch_32=i686 --build=x86_64-redhat-linux
        Thread model: posix
        gcc version 4.7.2 20121015 (Red Hat 4.7.2-5) (GCC)

    I then tried to install CMake from http://www.cmake.org/cmake/resources/software.html#latest, but I keep running into this error:

        Linking CXX executable ../bin/ccmake
        /opt/centos/devtoolset-1.1/root/usr/libexec/gcc/x86_64-redhat-linux/4.7.2/ld: CMakeFiles/ccmake.dir/CursesDialog/cmCursesMainForm.cxx.o: undefined reference to symbol 'keypad'
        /opt/centos/devtoolset-1.1/root/usr/libexec/gcc/x86_64-redhat-linux/4.7.2/ld: note: 'keypad' is defined in DSO /lib64/libtinfo.so.5 so try adding it to the linker command line
        /lib64/libtinfo.so.5: could not read symbols: Invalid operation
        collect2: error: ld returned 1 exit status
        gmake[2]: *** [bin/ccmake] Error 1
        gmake[1]: *** [Source/CMakeFiles/ccmake.dir/all] Error 2
        gmake: *** [all] Error 2

    The problem seems to come from the newly installed gcc, because the build works with the default install. Is there a solution to this problem?

    Read the article

  • How to set shmall, shmmax, shmni, etc ... in general and for postgresql

    - by jpic
    I've used the PostgreSQL documentation to set, for example, this config:

        >>> cat /proc/meminfo
        MemTotal:       16345480 kB
        MemFree:         1770128 kB
        Buffers:          382184 kB
        Cached:         10432632 kB
        SwapCached:            0 kB
        Active:          9228324 kB
        Inactive:        4621264 kB
        Active(anon):    7019996 kB
        Inactive(anon):   548528 kB
        Active(file):    2208328 kB
        Inactive(file):  4072736 kB
        Unevictable:           0 kB
        Mlocked:               0 kB
        SwapTotal:             0 kB
        SwapFree:              0 kB
        Dirty:              3432 kB
        Writeback:             0 kB
        AnonPages:       3034588 kB
        Mapped:          4243720 kB
        Shmem:           4533752 kB
        Slab:             481728 kB
        SReclaimable:     440712 kB
        SUnreclaim:        41016 kB
        KernelStack:        1776 kB
        PageTables:        39208 kB
        NFS_Unstable:          0 kB
        Bounce:                0 kB
        WritebackTmp:          0 kB
        CommitLimit:     8172740 kB
        Committed_AS:   14935216 kB
        VmallocTotal:   34359738367 kB
        VmallocUsed:      399340 kB
        VmallocChunk:   34359334908 kB
        HardwareCorrupted:     0 kB
        AnonHugePages:    456704 kB
        HugePages_Total:       0
        HugePages_Free:        0
        HugePages_Rsvd:        0
        HugePages_Surp:        0
        Hugepagesize:       2048 kB
        DirectMap4k:       12288 kB
        DirectMap2M:    16680960 kB

        >>> ipcs -l
        ------ Shared Memory Limits --------
        max number of segments = 4096
        max seg size (kbytes) = 4316816
        max total shared memory (kbytes) = 4316816
        min seg size (bytes) = 1
        ------ Semaphore Limits --------
        max number of arrays = 128
        max semaphores per array = 250
        max semaphores system wide = 32000
        max ops per semop call = 32
        semaphore max value = 32767
        ------ Messages Limits --------
        max queues system wide = 31918
        max size of message (bytes) = 8192
        default max size of queue (bytes) = 16384

    sysctl.conf extract:

        kernel.shmall = 1079204
        kernel.shmmax = 4420419584

    postgresql.conf non-defaults:

        max_connections = 60                  # (change requires restart)
        shared_buffers = 4GB                  # min 128kB
        work_mem = 4MB                        # min 64kB
        wal_sync_method = open_sync           # the default is the first option
        checkpoint_segments = 16              # in logfile segments, min 1, 16MB each
        checkpoint_completion_target = 0.9    # checkpoint target duration, 0.0 - 1.0
        effective_cache_size = 6GB

    Is this appropriate? If not (or not necessarily), in which cases would it be appropriate? We did note nice performance improvements with this config; how would you improve it? How should kernel memory management parameters be set? Can anybody explain how to really set them from the ground up?
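
    One sanity check on the units, since they are the usual stumbling block here: kernel.shmmax is in bytes while kernel.shmall is in pages, and with 4 kB pages (standard on x86_64) the two values above are exactly consistent with each other:

        $ getconf PAGE_SIZE
        4096
        $ echo $((4420419584 / 4096))   # shmmax in bytes -> pages
        1079204                         # matches kernel.shmall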

    Read the article

  • VMWare ESX, storage over 2TB

    - by Phliplip
    First off, I'm a web developer, and my server experience lies in setting up FreeBSD web servers. I'm working on a project for a photographer, and I'm hired to develop a new online photo ordering system, where users can of course view their photos :) They have a massive need for storage, so we have bought an HP G6 and 8x 1TB SATA HDDs. Our plan is to install VMware ESX 4.0 running multiple virtual machines: FreeBSD 8 for the web server and some Windows servers. That's already done. Then we want to mount one big storage volume on the BSD machine and share it through Samba to the Windows servers. The RAID is set up with one 2x1TB array to handle the VMs, and the rest is set up as 3 2x1TB arrays to handle the photo data, giving 2.73TB for photo data (the RAIDs are 1+0). Now, if we add a datastore in ESX and add the 3 LUNs, we can get a datastore of 2.74TB. But I don't see how I can attach this datastore directly to the VM; only the BSD VM needs access to it. The only way seems to be to create a VirtualDisk, with a max of 2TB (8MB block size), because the datastore where we save the virtual disk has a maximum file size of 2TB, and then add it as a hard disk to the BSD VM. In the 'Add Harddisk' pane for the VM, I see an option for Raw Disk Management, which I think is for accessing the datastore or the RAID directly. The only problem is that it's greyed out! Can I access the data storage directly from the BSD VM, without creating and adding a virtual disk?

    Read the article

  • SATA drives or chipset throwing DRDY ERR and ICRC ABRT

    - by Matt
    I have an SD-VIA-1A2S PCI card with 2 SATA ports (and one ATA-133 port that isn't used). Two new Western Digital Caviar Green drives (WD10EARS, 1TB) throw repeated errors in kern.log (date/time/host info removed for brevity):

        [    7.376475] ata2.00: exception Emask 0x12 SAct 0x0 SErr 0x1000500 action 0x6
        [    7.376480] ata2.00: BMDMA stat 0x5
        [    7.376483] ata2: SError: { UnrecovData Proto TrStaTrns }
        [    7.376489] ata2.00: cmd c8/00:40:20:00:00/00:00:00:00:00/e0 tag 0 dma 32768 in
        [    7.376490]          res 51/84:2f:20:00:00/00:00:00:00:00/e0 Emask 0x12 (ATA bus error)
        [    7.376493] ata2.00: status: { DRDY ERR }
        [    7.376495] ata2.00: error: { ICRC ABRT }
        [    7.376504] ata2: hard resetting link

    I'm using Ubuntu 9.04 (2.6.28-18-generic), though I have tried live CDs of Ubuntu 9.10, Fedora 12 and openSUSE 11.2, all running various 2.6.31 kernels, and all produced the same error. Based on testing these drives and this card in two other machines, in combinations of connecting the drives directly to the motherboard or to the add-in card, I'm relatively convinced that it's the VIA chipset that is the problem. Another computer that also has an onboard VIA SATA chipset (like the add-in card) produces the same errors when the drives are attached directly to that motherboard. I have been able to verify that the drives are perfectly good, and I have tried everything I can think of in terms of swapping cables, making sure the PSU isn't overloaded, etc. The error happens once or twice on boot, after using fdisk on a drive once or twice, and constantly when attempting to sync a new mdadm RAID 1 array created on the two drives. Any thoughts on where to go from here, driver/kernel-wise? I'm completely open to buying a new PCI add-in card if someone can recommend one with 2 internal SATA ports that works well in Debian/Ubuntu. Thanks!

    Read the article

  • Uploadify Flash Uploader and Random UPLOAD_ERR_CANT_WRITE errors

    - by dcneiner
    I am using Uploadify to provide progress-bar support for file uploads in a PHP app I built. It works perfectly for a few uploads, then every few uploads it fails, and the data from the $_FILES array reveals an UPLOAD_ERR_CANT_WRITE error (error code 7). I ran the Paros proxy between my browser and the server to see the difference between a passing and a failing request. The only difference was the content separator for the multi-part POST, which changes every time. I would conclude this is fully a server error, except that with a plain-jane form I cannot reproduce it. I am not a server guy, so please let me know what information is needed to troubleshoot this and I will update the question with those details. I did place these lines in the .htaccess, but to no avail. The site is hosted on Rackspace Cloud Sites, so my configuration options are limited:

        <IfModule mod_security.c>
            SecFilterEngine Off
            SecFilterScanPOST Off
        </IfModule>

        php_value upload_max_filesize 10M
        php_value post_max_size 10M
        php_value max_execution_time 200
        php_value max_input_time 200

    Read the article

  • Move database from SQL Server 2012 to 2008

    - by Rich
    I have a database on a SQL Server 2012 instance which I would like to copy to a 2008 server. The 2008 server cannot restore backups created by a 2012 server (I have tried). I cannot find any option in 2012 to create a 2008-compatible backup. Am I missing something? Is there an easy way to export the schema and data to a version-agnostic format which I can then import into 2008? The database does not use any 2012-specific features; it contains tables, data and stored procedures. Here is what I have tried so far: I tried "Tasks" - "Generate Scripts" on the 2012 server, and I was able to generate the schema (including stored procedures) as a SQL script. This didn't include any of the data, though. After creating that schema on my 2008 machine, I was able to open the "Export Data" wizard on the 2012 machine, and after configuring 2012 as the source machine and 2008 as the target machine, I was presented with a list of tables which I could copy. I selected all my tables (300+) and clicked through the wizard. Unfortunately it spends ages generating its scripts, then fails with errors like "Failure inserting into the read-only column 'FOO_ID'". I also tried the "Copy Database Wizard", which claims to be able to copy "from 2000 or later to 2005 or later". It has two modes:

    1. "Detach and attach", which failed with the error:

        Message: Index was outside the bounds of the array.
        StackTrace: at Microsoft.SqlServer.Management.Smo.PropertyBag.SetValue(Int32 index, Object value)
        ... at Microsoft.SqlServer.Management.Smo.DataFile.get_FileName()

    2. "SQL Management Object Method", which failed with the error: "Cannot read property IsFileStream. This property is not available on SQL Server 7.0."

    Read the article

  • RAID 5 - DELL 2850 and others

    - by Kiara
    I have installed Ubuntu on a Dell 2850 and configured an array of 5 disks (73GB 10K SCSI) in RAID 5. I wanted to simulate a drive error, so in the middle of something I just took one of the drives out and put it back again after a bit. The drive then shows an orange light and seems to be rebuilding, but it's actually been taking hours and hours with no result. So I went to the PERC utility (Ctrl+M), where the disk shows "REBLD", but it never gets to an online state. So I went to Objects - Physical drives - Rebuilding - View rebuild process. Here I can see a bar moving from 0%... but if I reboot before it finishes and get into the PERC utility again, the rebuild seems to start again from 0%, so it is not rebuilding automatically. My concern is: what would happen in a real situation? Do I have to switch the server off and go to the PERC utility to start the rebuild manually? I thought the whole point was to have this done automatically, without the need to stop the server. Or does it perhaps rebuild automatically but need enough time without a reboot, because otherwise the rebuilding process starts from scratch? It seems to take more than 3 hours for a 73GB disk! My second question is: can I mix hard drives? If I have a RAID of 5x73GB 10K, can I use a different size (146GB) or speed (15K)? Apparently someone said it is OK here: Poweredge 2850: replace disk with larger in RAID?

    Read the article

  • Outbound HTTP performance tuning recommendations

    - by Richard Gadsden
    I'll detail my exact setup below, but general recommendations for a better web-browsing experience will be useful. A nice checklist of things to try would be great! I have 600 users on a single site with an 8MB leased line. I get a lot of moans about the performance of "the internet" (ie web-browsing). What recommendations do the community have for speeding things up without just throwing more bandwidth at it? I expect I will end up buying some more, but good management tips are always valuable. My setup is this: Cisco PIX (515E) firewall on the edge of the network. It's just doing some basic NAT, and opening up a handful of ports to various bastion hosts (aka DMZ servers). The DMZ is just a switch that the servers are plugged into. ISA 2006 Enterprise array (two servers) connecting DMZ to the internal LAN, with WebSense Web Security filtering HTTP traffic so users can't look at porn or waste bandwidth on YouTube during working hours. I've done a few things - I've just switched my internal DNS over to use root hints, which halved DNS query latency from 500ms to 250ms. Well worth doing. I'm trying to cache more aggressively, but so much more of the internet is AJAXy and doesn't cache very well as compared to five years ago. Plus the 70GB of cache which felt like a lot a few years ago really isn't any more. I'm getting about 45% cache hits by number of requests, but only about 22% by size, ie larger objects are less likely to be cached. Latency seems to be part of the problem. Is that attributable to the bandwidth problem, or are there things I can look at to try to reduce latency even on heavily-loaded bandwidth?

    Read the article

  • NVidia ION and /dev/mapper/nvidia_... issues.

    - by Ritsaert Hornstra
    I have an NVidia ION board with 4 SATA ports and want to use it to run a Linux server (CentOS 5.4). I first hooked up 3 HDs (which will be a RAID 5 array) and a fourth small boot HD. I started out using the onboard RAID capability, but that does not work correctly under Linux: the RAID is not a real RAID but uses LVM to define some arrays. After setting the BIOS back to normal SATA mode and wiping the HDs, the first boot hard disk (/dev/sda) is seen as /dev/sda BEFORE mounting, and after mounting as /dev/mapper/nvidia_. CentOS is unable to install on it (and GRUB is not installable on it either). So somehow the hard disk is still seen as if it belongs to some LVM volume. I tried to clean out the HD by issuing a few dd if=/dev/zero of=/dev/sda commands to wipe the starting cylinders and final cylinders, but to no avail. Did anyone see this problem, and did anyone find a solution?

    UPDATE: When I create only a single ext3 partition on the first HD (/dev/mapper/nvidia_...), no LVM partitions are seen and I can boot from /dev/mapper/nvidia_.... Now the next step is to see how I can get rid of this folly.

    Read the article

  • Can't get an IBM xSeries 345 server to load Windows Server 2003 using ServerGuide utility

    - by Kyle Noland
    I have a client with an IBM xSeries 345 eServer. Per the IBM support website, I have downloaded the ServerGuide Setup 7.4.17 installation ISO and burned a bootable CD. The CD boots fine and loads the utility. I walk through the following screens without any issue:

    1. Set the date and time
    2. Detect the IBM ServeRAID card and install the latest firmware
    3. Clear the hard disks
    4. Set up the RAID array

    The next step is to format the NOS partition. I select my partition size and the utility goes through the following steps:

    1. Creating NOS partition
    2. Formatting NOS partition (NTFS)
    3. Copying W32 files

    Copying the W32 files takes about 10 minutes; I see the CD drive and disks working hard. When the copying is complete, I'm taken to a blank page with just "NOS Partitioning" at the top. At the bottom of the screen are the familiar Back and Exit buttons. I see the place where the Next button should be, and if I click on it I can tell there is something there, but the space is empty: no button is displayed, and clicking the empty spot never takes me to the next screen. I can't load the OS until I get past this part. I have already tried:

    1. Burning multiple copies and versions of the ServerGuide CD
    2. Letting the final screen just sit there over the weekend, thinking it might advance after syncing the drives or something

    Has anybody else seen this? I'm really at a loss here.

    EDIT: I found another person who has the exact same problem as me: http://www.ibm.com/developerworks/forums/thread.jspa?messageID=14451763

    Read the article

  • JAWStats statspath error on windows

    - by crosenblum
    I have AWStats working fine, and I am trying to get JAWStats working too. I have tried both back and forward slashes to get the program to read the DirData files, and I even moved the DirData folder outside of the Program Files folder in case it had problems with folder names containing spaces. Here is my config file:

        // core config parameters
        $sConfigDefaultView = "thismonth.all";
        $bConfigChangeSites = true;
        $bConfigUpdateSites = true;
        $sUpdateSiteFilename = "xml_update.php";

        // individual site configuration
        // awstats092012.noname.jumpingcrab.com.txt
        $aConfig["site1"] = array(
            "statspath"  => "C:\\Program Files\\AWStats\\DirData\\",
            "statsname"  => "awstats[MM][YYYY].yourexample.com.txt",
            "updatepath" => "C:\\Program Files\\AWStats\\wwwroot\\cgi-bin\\awstats.pl\\",
            "siteurl"    => "http://yourexample.com",
            "theme"      => "default",
            "fadespeed"  => 250,
            "password"   => "",
            "includes"   => ""
        );

    Domain names changed to protect the innocent... :P

    Here is the error message:

        An error has occured: No AWStats Log Files Found
        JAWStats cannot find any AWStats log files in the specified directory: C:\Program Files\AWStats\DirData\
        Is this the correct folder? Is your config name, site1, correct?
        Please refer to the installation instructions for more information.

    Read the article

  • ZFS: Mirror vs. RAID-Z

    - by John Clayton
    I'm planning on building a file server using OpenSolaris and ZFS that will provide two primary services: an iSCSI target for XenServer virtual machines, and a general home file server. The hardware I'm looking at includes 2x 4-port SATA controllers, 2x small boot drives (one on each controller), and 4x big drives for storage. This leaves one free port per controller for upgrading the array down the road. Where I'm a little confused is how to set up the storage drives. For performance, mirroring appears to be king. I'm having a hard time seeing what the benefit of using RAIDZ over mirroring would be. With this setup I can see two options: a pool of two mirrored pairs (striped), or RAIDZ2. Both should protect against 2 drive failures and/or one controller failure... the only benefit of RAIDZ2 would be that any 2 drives could fail. The usable storage should be 50% of capacity in both cases, but the first should have much better performance, right? The other thing I'm trying to wrap my mind around is the benefit of mirrored arrays with more than two devices. For data integrity, what, if any, would be the benefit of RAIDZ over a three-way mirror? Since ZFS maintains file integrity, what does RAIDZ bring to the table... doesn't ZFS's integrity checking negate the value of RAIDZ's parity?
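
    For concreteness, the two candidate layouts look like this at the command line (placeholder device names); both are single pools, the first striping writes across two mirror vdevs:

        # pool of two mirrored pairs (ZFS stripes across the vdevs)
        zpool create tank mirror c0t0d0 c0t1d0 mirror c1t0d0 c1t1d0

        # RAIDZ2 across all four disks
        zpool create tank raidz2 c0t0d0 c0t1d0 c1t0d0 c1t1d0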

    Read the article

  • Unable to authenticate to Windows Server 2003 for file browsing as non-administrator user.

    - by Fopedush
    I've got a Windows Server 2003 box containing a RAID 5 array I use for mass storage. I want to set up a special non-administrator account that can be used to browse files over the network, with only read access. Ideally I'll map my network drive as this user to avoid accidentally hosing my data, and mount as an administrator user on occasions where I actually need write access. I've created a non-administrator user on the Windows Server box (called "ReadOnly") and granted the user read permissions on the folders I need. However, when I try to browse to the files and authenticate as this user, I'm told "Permission denied". If I throw the ReadOnly user into the Administrators group, however, I can authenticate and browse just fine. I am, of course, only attempting to browse to folders for which I have given this user read permissions. Obviously my ReadOnly user is missing some privilege here, but I can't figure out what it is. I've been digging around in the Group Policy editor all day to no avail. What am I missing?

    Fake Edit: I'm doing my browsing from a Windows 7 box, but I don't think that is relevant.

    Read the article
