Search Results

Search found 29473 results on 1179 pages for 'solaris 10'.

Page 519/1179 | < Previous Page | 515 516 517 518 519 520 521 522 523 524 525 526  | Next Page >

  • Network folder image thumbnails taking long time to load

    - by Steve
    Our internal network folder can take up to 5 seconds to generate a thumbnail for each photo it contains. Usually only 10% of the thumbnails actually display; the rest fall back to the default JPG icon. A thumbs.db file already exists inside this folder, so presumably the thumbnails have been generated before. Why do they have to be generated again? The PC is Windows 7 64-bit; the server is Windows Server 2003 SP2.

    Read the article

  • Cocos2d-iPhone: CCSprite positions differ between Retina & non-Retina Screens

    - by bobwaycott
    I have a fairly simple app built using cocos2d-iphone, but a strange positioning problem that I've been unable to resolve. The app uses sprite sheets, and there is a Retina and a non-Retina sprite sheet within the app that use the exact same artwork (except for resolution, of course). There is other artwork within the app used for CCSprites that is both standard and -hd suffixed. Within the app, a group of sprites is created when the app starts. These initially created CCSprites always position identically (and correctly) on Retina & non-Retina screens.

        // In method called to setup sprites when app launches
        // Cache & setup app sprites
        [[CCSpriteFrameCache sharedSpriteFrameCache] addSpriteFramesWithFile: @"sprites.plist"];
        sprites = [CCSpriteBatchNode batchNodeWithFile: @"sprites.png"];
        hill = [CCSprite spriteWithSpriteFrameName: @"hill.png"];
        hill.position = ccp( 160, 75 );
        [sprites addChild: hill z: 1];
        // ... [create more sprites in same fashion]
        // NOTE: All sprites created here have correct positioning on Retina & non-Retina screens

    When a user taps the screen a certain way, a method is called that creates another group of CCSprites (on- and off-screen) and animates them all in. One of these sprites, hand, is always positioned identically (and correctly) on Retina & non-Retina screens. The others (a group of clouds) are successfully created & animated, but their positions are correct only on Retina displays. On a non-Retina display, each cloud has an incorrect starting position (incorrect for x, y, or sometimes both), and their ending positions after animation are also wrong. I've included the responsible code below from the on-touch method that creates the new sprites and animates them in. Again, it works as expected on a Retina display, but incorrectly on non-Retina screens. The CCSprites here are created in the same way as the initial sprites at app start, which always have correct positions.

        // Elsewhere, in a method called on touch
        // Create clouds
        cloud1 = [CCSprite spriteWithSpriteFrameName: @"cloud_1.png"];
        cloud1.position = ccp(-150, 320);
        cloud1.scale = 1.2f;
        cloud2 = [CCSprite spriteWithSpriteFrameName: @"cloud_2.png"];
        cloud2.position = ccp(-150, 335);
        cloud2.scale = 1.3f;
        cloud3 = [CCSprite spriteWithSpriteFrameName: @"cloud_4.png"];
        cloud3.position = ccp(-150, 400);
        cloud4 = [CCSprite spriteWithSpriteFrameName: @"cloud_5.png"];
        cloud4.position = ccp(-150, 420);
        cloud5 = [CCSprite spriteWithSpriteFrameName: @"cloud_3.png"];
        cloud5.position = ccp(400, 350);
        cloud6 = [CCSprite spriteWithSpriteFrameName: @"cloud_1.png"];
        cloud6.position = ccp(400, 335);
        cloud6.scale = 1.1f;
        cloud7 = [CCSprite spriteWithSpriteFrameName: @"cloud_2.png"];
        cloud7.flipY = YES;
        cloud7.flipX = YES;
        cloud7.position = ccp(400, 380);

        // Create hand
        hand = [CCSprite spriteWithSpriteFrameName: @"hand.png"];
        hand.position = ccp(160, 650);

        [sprites addChild: cloud1 z: 10];
        [sprites addChild: cloud2 z: 9];
        [sprites addChild: cloud3 z: 8];
        [sprites addChild: cloud4 z: 7];
        [sprites addChild: cloud5 z: 6];
        [sprites addChild: cloud6 z: 10];
        [sprites addChild: cloud7 z: 8];
        [sprites addChild: hand z: 10];

        // ACTION!!
        [cloud1 runAction: [CCMoveTo actionWithDuration: 1.0f position: ccp(70, 320)]];
        [cloud2 runAction: [CCMoveTo actionWithDuration: 1.0f position: ccp(60, 335)]];
        [cloud3 runAction: [CCMoveTo actionWithDuration: 1.0f position: ccp(100, 400)]];
        [cloud4 runAction: [CCMoveTo actionWithDuration: 1.0f position: ccp(80, 420)]];
        [cloud5 runAction: [CCMoveTo actionWithDuration: 1.0f position: ccp(250, 350)]];
        [cloud6 runAction: [CCMoveTo actionWithDuration: 1.0f position: ccp(250, 335)]];
        [cloud7 runAction: [CCMoveTo actionWithDuration: 1.0f position: ccp(270, 380)]];
        [hand runAction: handIn];

    It may be worth mentioning that I see this incorrect positioning behavior in the iOS Simulator when running the app and switching between the standard iPhone and iPhone (Retina) hardware options. I have not been able to verify whether it occurs on an actual non-Retina iPhone because I do not have one. However, this is the only time I see this odd positioning behavior (the incorrect results obtained after a user touch), and since I'm creating all sprites in exactly the same way (i.e., [CCSprite spriteWithSpriteFrameName:] and then setting position with ccp()), I would be especially grateful for any help in tracking down why this single group of sprites is always incorrect on non-Retina screens. Thank you.

    Read the article

  • Fail2ban memory usage

    - by ltsstar
    Since my server is under a sustained DNS amplification attack (DDoS), I configured fail2ban, and initially my outgoing traffic dropped markedly. Anyway, after a few hours (usually more than 10), fail2ban uses about 75% of RAM and seems to have crashed in some way, because the outgoing traffic rises again immediately afterwards. When I searched the web for the memory problem, I found some people complaining about high fail2ban memory usage as well, but the recommended solution, to insert an ulimit command into a fail2ban config file, did not change much for me.
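
    For reference, a minimal sketch of the ulimit workaround mentioned above, assuming a Debian/Ubuntu-style /etc/default/fail2ban (the path and the stack-size value are assumptions to adapt):

        # Lower the per-thread stack size before fail2ban starts, then restart and watch memory
        echo 'ulimit -s 256' | sudo tee -a /etc/default/fail2ban
        sudo service fail2ban restart
        watch -n 60 'ps -o rss,vsz,cmd -C fail2ban-server'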

    Read the article

  • How to make negate_unary work with any type?

    - by Chan
    Hi, following this question: How to negate a predicate function using operator ! in C++? I want to create an operator ! that can work with any functor that inherits from unary_function. I tried:

        template<typename T>
        inline std::unary_negate<T> operator !( const T& pred )
        {
            return std::not1( pred );
        }

    The compiler complained:

        Error 5 error C2955: 'std::unary_function' : use of class template requires template argument list c:\program files\microsoft visual studio 10.0\vc\include\xfunctional 223 1 Graphic
        Error 7 error C2451: conditional expression of type 'std::unary_negate<_Fn1>' is illegal c:\program files\microsoft visual studio 10.0\vc\include\ostream 529 1 Graphic
        Error 3 error C2146: syntax error : missing ',' before identifier 'argument_type' c:\program files\microsoft visual studio 10.0\vc\include\xfunctional 222 1 Graphic
        Error 4 error C2065: 'argument_type' : undeclared identifier c:\program files\microsoft visual studio 10.0\vc\include\xfunctional 222 1 Graphic
        Error 2 error C2039: 'argument_type' : is not a member of 'std::basic_ostream<_Elem,_Traits>::sentry' c:\program files\microsoft visual studio 10.0\vc\include\xfunctional 222 1 Graphic
        Error 6 error C2039: 'argument_type' : is not a member of 'std::basic_ostream<_Elem,_Traits>::sentry' c:\program files\microsoft visual studio 10.0\vc\include\xfunctional 230 1 Graphic

    Any idea?

    Update: Following "templatetypedef"'s solution, I got new errors:

        Error 3 error C2831: 'operator !' cannot have default parameters c:\visual studio 2010 projects\graphic\graphic\main.cpp 39 1 Graphic
        Error 2 error C2808: unary 'operator !' has too many formal parameters c:\visual studio 2010 projects\graphic\graphic\main.cpp 39 1 Graphic
        Error 4 error C2675: unary '!' : 'is_prime' does not define this operator or a conversion to a type acceptable to the predefined operator c:\visual studio 2010 projects\graphic\graphic\main.cpp 52 1 Graphic

    Update 1: Complete code:

        #include <iostream>
        #include <functional>
        #include <utility>
        #include <cmath>
        #include <algorithm>
        #include <iterator>
        #include <string>
        #include <boost/assign.hpp>
        #include <boost/assign/std/vector.hpp>
        #include <boost/assign/std/map.hpp>
        #include <boost/assign/std/set.hpp>
        #include <boost/assign/std/list.hpp>
        #include <boost/assign/std/stack.hpp>
        #include <boost/assign/std/deque.hpp>

        struct is_prime : std::unary_function<int, bool>
        {
            bool operator()( int n ) const
            {
                if( n < 2 ) return 0;
                if( n == 2 || n == 3 ) return 1;
                if( n % 2 == 0 || n % 3 == 0 ) return 0;
                int upper_bound = std::sqrt( static_cast<double>( n ) );
                for( int pf = 5, step = 2; pf <= upper_bound; )
                {
                    if( n % pf == 0 ) return 0;
                    pf += step;
                    step = 6 - step;
                }
                return 1;
            }
        };

        /*
        template<typename T>
        inline std::unary_negate<T> operator !( const T& pred, typename T::argument_type* dummy = 0 )
        {
            return std::not1<T>( pred );
        }
        */

        inline std::unary_negate<is_prime> operator !( const is_prime& pred )
        {
            return std::not1( pred );
        }

        template<typename T>
        inline void print_con( const T& con, const std::string& ms = "", const std::string& sep = ", " )
        {
            std::cout << ms << '\n';
            std::copy( con.begin(), con.end(), std::ostream_iterator<typename T::value_type>( std::cout, sep.c_str() ) );
            std::cout << "\n\n";
        }

        int main()
        {
            using namespace boost::assign;
            std::vector<int> nums;
            nums += 1, 3, 5, 7, 9;
            nums.erase( remove_if( nums.begin(), nums.end(), !is_prime() ), nums.end() );
            print_con( nums, "After remove all primes" );
        }

    Thanks, Chan Nguyen

    Read the article

  • Lots of dropped packages when tcpdumping on busy interface

    - by Frands Hansen
    My challenge: I need to do tcpdumping of a lot of data - actually from 2 interfaces left in promiscuous mode that are able to see a lot of traffic. To sum it up:

        - Log all traffic in promiscuous mode from 2 interfaces
        - Those interfaces are not assigned an IP address
        - pcap files must be rotated per ~1G
        - When 10 TB of files are stored, start truncating the oldest

    What I currently do: right now I use tcpdump like this:

        tcpdump -n -C 1000 -z /data/compress.sh -i any -w /data/livedump/capture.pcap $FILTER

    The $FILTER contains src/dst filters so that I can use -i any. The reason for this is that I have two interfaces and I would like to run the dump in a single thread rather than two. compress.sh takes care of assigning tar to another CPU core, compressing the data, giving it a reasonable filename and moving it to an archive location. I cannot specify two interfaces, thus I have chosen to use filters and dump from any interface. Right now I do not do any housekeeping, but I plan on monitoring the disk, and when I have 100G left I will start wiping the oldest files - this should be fine.

    And now, my problem: I see dropped packets. This is from a dump that has been running for a few hours and collected roughly 250 gigs of pcap files:

        430083369 packets captured
        430115470 packets received by filter
        32057 packets dropped by kernel   <-- This is my concern

    How can I avoid so many packets being dropped? These things I already tried or looked at: I changed the values of /proc/sys/net/core/rmem_max and /proc/sys/net/core/rmem_default, which did indeed help - actually it took care of just around half of the dropped packets. I have also looked at gulp - the problem with gulp is that it does not support multiple interfaces in one process and it gets angry if the interface does not have an IP address. Unfortunately, that is a deal breaker in my case. The next problem is that when the traffic flows through a pipe, I cannot get the automatic rotation going. Getting one huge 10 TB file is not very efficient and I don't have a machine with 10TB+ RAM that I can run wireshark on, so that's out. Do you have any suggestions? Maybe even a better way of doing my traffic dump altogether.
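
    For reference, a rough sketch of the buffer-related knobs mentioned above (the sizes are assumptions to tune for your box): the kernel receive buffers can be raised with sysctl, and tcpdump itself accepts a larger capture buffer via -B (in KiB).

        sysctl -w net.core.rmem_max=33554432
        sysctl -w net.core.rmem_default=33554432
        # same capture command as above, with a 64 MiB capture buffer
        tcpdump -n -B 65536 -C 1000 -z /data/compress.sh -i any -w /data/livedump/capture.pcap $FILTER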

    Read the article

  • Why some "non-profit" hoax and spam are created? [closed]

    - by naxa
    Many spam/hoax messages have no direct link to any rip-off site or similar; they just make sure people spread them ("forward this to at least 10 people or else"). Some of those may be created out of good faith, and I'm not interested in those... But for the rest, I have long suspected that there is some other reason for making them beyond making fun of people (without getting much feedback)... Why are these created?

    Read the article

  • How much memory do I need for innodb buffer pool?

    - by Shirko
    I read here that "You need a buffer pool a bit (say 10%) larger than your data (total size of InnoDB tablespaces)". On the other hand, I've read elsewhere that innodb_buffer_pool_size can be up to 80% of the memory. So I'm really confused about how I should choose the best size for the pool. My database size is about 6 GB and my total memory is 64 GB. Also I'm wondering whether, if I increase the buffer pool size, I should shrink the maximum number of connections to make room for the extra buffer, or whether these parameters are independent. Thanks
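
    For reference, a rough way to check the figures discussed above (illustrative only; the 8G value below is an assumption): measure the total InnoDB data+index size, then set the pool somewhat larger than that in my.cnf and restart mysqld.

        mysql -e "SELECT ROUND(SUM(data_length + index_length)/1024/1024/1024, 2) AS innodb_gb
                  FROM information_schema.tables WHERE engine = 'InnoDB';"
        # then, in my.cnf under [mysqld]:
        #   innodb_buffer_pool_size = 8G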

    Read the article

  • What would be the best schema to store the 'address' for different entities?

    - by Cesar
    Suppose we're making a system where we have to store the address for buildings, persons, cars, etc. The address 'format' should be something like: State (from a State list), County (from a County list), Street (free text, like '5th Avenue'), Number (free text, like 'Chrysler Building, Floor 10, Office No. 10'). (Yes, I don't live in the U.S.A.) What would be the best way to store that info? Should I have a Person_Address, Car_Address, ... table for each entity? Should the address info be in columns on each entity? Or could we have just one address table and try to link each row to a different entity? Or is there another 'better' way to handle this type of scenario? How would you do it?

    Read the article

  • Encrypted partitions with redundancy on ubuntu server

    - by Flamewires
    Hey, I have to set up a file system with an encrypted partition on an Ubuntu server, something like:

        Unencrypted:
            /     - 10 GB
            /home - 10 GB
            /var  -  5 GB
        --------------
        Encrypted:
            /opt  - 50 GB

    This I can figure out in the setup: just partition as normal and set up the encrypted volume with dm-crypt. However, I'm not sure how to mirror this entire drive so that if either disk failed I could still boot, and how that will affect the encrypted partition. Any help would be appreciated.
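
    For reference, a minimal sketch of one way to combine mirroring with dm-crypt (device names are assumptions; the idea is to mirror first and encrypt on top of the mirror, so both disks carry the same encrypted data):

        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda5 /dev/sdb5
        cryptsetup luksFormat /dev/md0
        cryptsetup luksOpen /dev/md0 opt_crypt
        mkfs.ext4 /dev/mapper/opt_crypt
        mount /dev/mapper/opt_crypt /opt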

    Read the article

  • Getting SEC to only monitor latest version of a log file?

    - by user439407
    I have been tasked with running SEC to help correlate PHP logs. The basic setup is pretty straightforward; the problem I'm having is that we want to monitor a log file whose name contains the date (php-2012-10-01.log, for instance). How can I tell SEC to only monitor the latest version of the file (and, of course, switch to the newest log file every day at midnight)? I could do something like creating a 'latest' file that links to the newest version and running a cron job at midnight to update the link, but I am looking for a more elegant solution.
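
    For reference, a minimal sketch of the symlink-plus-cron workaround described above (paths are assumptions; note that % must be escaped in crontab entries):

        # /etc/cron.d/php-latest-log
        0 0 * * * root ln -sf /var/log/php/php-$(date +\%F).log /var/log/php/php-latest.log

    SEC would then be pointed at php-latest.log instead of the dated file.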

    Read the article

  • iotop for Linux kernel 2.6.18

    - by Lightsauce
    So it has come to my attention that iotop isn't available for 2.6.18, since that is less than 2.6.20 and iotop requires Python 2.6+. I've done some research and came across this article: http://lserinol.blogspot.com/2009/09/io-usage-per-process-on-linux.html According to this, if processes have I/O stats in /proc/pid#/io (where pid# is the process number), it's doable regardless of the kernel version. So, in reality, I could upgrade Python to 2.6 and test out iotop. However, my flavor of Linux, CentOS release 5.5 (Final), only supports Python 2.4.3-44.el5 currently. If I were to uninstall it with yum, it doesn't look so pretty: it ends up wanting to uninstall 235 packages, most of which are very important! I read in one place online (I forget the URL from yesterday) that you can install Python 2.6+ in parallel and have the rpm install for iotop use that. Well, I didn't choose that route. I figured, what the heck, let's write iotop (not copying it, but reverse engineering it without actually looking at its code or at it in use) in bash. I thought it would just grab the /proc/pid#/io file and parse stats. So I wrote a script to grab the top 10 rchar, wchar, read_bytes, and write_bytes values by collecting all these stats from all the /proc/pid#/io files, sorting them by each metric, then grabbing the top 10 highest values. The conclusion: the data seems completely useless. Does anybody know any resources for advanced Linux where I can figure out how to take these /proc/pid#/ directories and figure out what the heck they are doing with I/O on the disk? My main goal is to figure out what exactly is causing high load on my disk. I just know it's on the / partition (/dev/sda2 in this case), and I'm not really sure how to narrow it down without the help of iotop. If I run iostat to grab metrics for 1 minute, every second, the first result it gives me shows a high 'kB_read/s', so that makes me think it's mostly reading. However, if I watch the updates it gives me every second, it actually just shows values for kB_wrtn/s. This makes me think the initial value iostat gives me is misleading.
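
    For what it's worth, a rough sketch of the kind of per-process read-out described above, assuming the kernel actually exposes /proc/<pid>/io (the numbers are cumulative bytes since process start, so sample twice and diff them if you want a rate):

        for f in /proc/[0-9]*/io; do
            pid=${f#/proc/}; pid=${pid%/io}
            rb=$(awk '/^read_bytes:/  {print $2}' "$f" 2>/dev/null)
            wb=$(awk '/^write_bytes:/ {print $2}' "$f" 2>/dev/null)
            cmd=$(tr '\0' ' ' < "/proc/$pid/cmdline" 2>/dev/null)
            [ -n "$rb" ] && printf '%12s %12s  %s\n' "$rb" "$wb" "${cmd:-[$pid]}"
        done | sort -rn | head -10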

    Read the article

  • Automatically restore windows network drive (red "X") via batch file

    - by user33958
    I need to check whether a network drive is mapped and accessible. From time to time Windows displays a red X on the drive, and I then need to manually click the drive in Explorer to reconnect it. I already found solutions which involve editing the registry, which unfortunately isn't possible here. So I need a batch file that checks for the connection and (re-)mounts the drive. What I'm using at the moment:

        IF NOT EXIST z: net use z: \\10.211.55.5\test

    Read the article

  • Transaction Log filling up on SQL database set to Simple

    - by Will
    We have a database on a SQL Server 2005 instance that is set to the Simple recovery model. The log file is set to 1 MB and is set to grow by 10% when it needs to. We keep running into an issue where the transaction log fills up and we need to shrink it. What could cause the transaction log to fill up when it is set to Simple and unrestricted growth is allowed?

    Read the article

  • strange array in php

    - by tunpishuang
    Here I wrote a function whose general purpose is to get an array of the depIds under the parent root $depId. I use recursion to build the array.

        public function getEmpsByDep($depId){
            $query = "select * from ".SQLPREFIX."department where id_parent=".$depId;
            $stmt = $this->db->query($query);
            while(($row = $this->db->fetch_assoc($stmt)) == true)
            {
                if($this->hasChildNode($row['DEPID']))
                {
                    $depId = $row['DEPID'];
                    self::getEmpsByDep($depId);
                }
                else
                {
                    $arr[] = $row['DEPID'];
                }
            }
            return ($arr);
        }

    While I think it should return a 1D array of the depIds, it returns a strange 2D array like this:

        array(4) { [0]=> string(2) "11" [1]=> string(2) "12" [2]=> string(2) "13" [3]=> string(2) "14" }
        array(3) { [0]=> string(2) "19" [1]=> string(2) "20" [2]=> string(2) "21" }
        array(3) { [0]=> string(2) "15" [1]=> string(2) "16" [2]=> string(2) "17" }
        array(8) { [0]=> string(1) "2" [1]=> string(1) "4" [2]=> string(1) "5" [3]=> string(1) "6" [4]=> string(1) "7" [5]=> string(1) "8" [6]=> string(1) "9" [7]=> string(2) "10" }

    Here is the table structure and data sample:

        $query[] = "create table ".$sqltblpre."department(
            depId number(10) not null primary key,
            depName varchar2(50) not null,
            id_parent number(10)
        )";
        //department(?????)
        $index = 1;
        $query[] = "INSERT INTO ".$sqltblpre."department values(".$index++.",'??',0)";    //1
        $query[] = "INSERT INTO ".$sqltblpre."department values(".$index++.",'???',0)";   //2
        $query[] = "INSERT INTO ".$sqltblpre."department values(".$index++.",'???',0)";   //3
        $query[] = "INSERT INTO ".$sqltblpre."department values(".$index++.",'???',0)";   //4
        $query[] = "INSERT INTO ".$sqltblpre."department values(".$index++.",'???',0)";   //5
        $query[] = "INSERT INTO ".$sqltblpre."department values(".$index++.",'???',0)";   //6
        $query[] = "INSERT INTO ".$sqltblpre."department values(".$index++.",'?????',0)";
        $query[] = "INSERT INTO ".$sqltblpre."department values(".$index++.",'????',0)";
        $query[] = "INSERT INTO ".$sqltblpre."department values(".$index++.",'????',0)";
        $query[] = "INSERT INTO ".$sqltblpre."department values(".$index++.",'????',0)";
        $query[] = "INSERT INTO ".$sqltblpre."department values(".$index++.",'??',1)";
        $query[] = "INSERT INTO ".$sqltblpre."department values(".$index++.",'??',1)";
        $query[] = "INSERT INTO ".$sqltblpre."department values(".$index++.",'???',1)";
        $query[] = "INSERT INTO ".$sqltblpre."department values(".$index++.",'??',1)";
        $query[] = "INSERT INTO ".$sqltblpre."department values(".$index++.",'??',3)";
        $query[] = "INSERT INTO ".$sqltblpre."department values(".$index++.",'???',3)";
        $query[] = "INSERT INTO ".$sqltblpre."department values(".$index++.",'???',3)";
        $query[] = "INSERT INTO ".$sqltblpre."department values(".$index++.",'???',3)";   //18
        $query[] = "INSERT INTO ".$sqltblpre."department values(".$index++.",'??',18)";
        $query[] = "INSERT INTO ".$sqltblpre."department values(".$index++.",'???',18)";
        $query[] = "INSERT INTO ".$sqltblpre."department values(".$index++.",'??',18)";

    So, in a word, how can I get the 1D array with the right code for this function?

    Read the article

  • How to increase the number of vmware.log files?

    - by 27b6
    Hi all, is there any way to increase the number of vmware.log files kept for a given VM (VMware Server 1.0.4, host: SLES 10)? The problem is that one of our VMs crashes every now and then, and I can only see the four log files vmware.log, vmware-0.log, vmware-1.log and vmware-2.log, each of which contains records starting after the crash, when trying to boot. So I have no information about what happened before the crash. Thanks, Ingo
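
    For what it's worth, log rotation is usually controlled per VM in the .vmx file; the option name and value below are assumptions to verify against the VMware Server documentation before relying on them:

        # with the VM powered off, append to the VM's .vmx (path is a placeholder)
        echo 'log.keepOld = "10"' >> /vmware/myvm/myvm.vmx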

    Read the article

  • How to access Jenkins remotely on Ubuntu 12.04 server?

    - by quincyglenn
    I have installed Jenkins and opened port 8080 on an Ubuntu 12.04 server, but I still can't access Jenkins remotely. Below are the steps I took.

        # Install Jenkins, enable UFW and open port 8080
        sudo apt-get install jenkins
        sudo ufw enable
        sudo ufw allow 8080
        sudo ufw reload

        # Check the status
        sudo ufw status
        8080    ALLOW    Anywhere
        8080    ALLOW    Anywhere (v6)

        # Locally
        curl -I localhost:8080
        HTTP/1.1 200 OK
        Server: Winstone Servlet Engine v0.9.10
        ...

        # On an external machine
        curl -I [ip]:8080
        couldn't connect to host
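
    A few checks that may narrow this down (illustrative; the config path assumes the Debian/Ubuntu jenkins package):

        sudo netstat -tlnp | grep 8080        # bound to 0.0.0.0:8080, or only 127.0.0.1:8080?
        grep -i listen /etc/default/jenkins   # an --httpListenAddress argument here would restrict it to localhost
        sudo iptables -L -n                   # another firewall or a cloud security group may still block 8080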

    Read the article

  • single web app multible webservers

    - by Ramakrishna
    Hi guys, I have a load-balancing problem. We developed a web app that is used by nearly 1500 users in parallel, and as the number of users increased we became unable to serve requests quickly: it takes around 10 to 20 seconds to load a page, and if the page is heavy it can take 1 minute. We need to resolve this situation and serve each request within 2 to 3 seconds. App developed in: ASP.NET. Hosted in: IIS 7.5. Machine configuration: Windows Server 2008, 8 GB RAM, 1 Mbps bandwidth.

    Read the article

  • ubuntu bash shell

    - by redcoder
    Hi, I am very new to Ubuntu and struggling to find the proper command to install Zend Server on my Ubuntu 9.10 server. After downloading the Zend Server archive and extracting it, I try to run this command:

        install_zs.sh 5.3

    This is the ls of the extracted archive:

        install_zs.sh  README  upgrade_zs_php.sh  zend.deb.repo  zend.rpm.repo

    But it says "command not found"... any idea?
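
    A guess at the likely cause (the path below is a placeholder): the current directory is not on PATH, so the script has to be invoked with an explicit path and must be executable.

        cd /path/to/extracted/ZendServer
        chmod +x install_zs.sh
        sudo ./install_zs.sh 5.3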

    Read the article

  • Remote deployment of OS X Mountain Lion 10.8 upgrade, not as a fresh install

    - by Dean A. Vassallo
    Does anyone have any ideas or suggestions (or know if it is even possible) for remotely upgrading a fleet of Macs from 10.6.8 to 10.8? I presume I can push the InstallESD through ARD, but I want it to run completely unattended. If it is not possible through "traditional" methods, does anyone know of any tools that might help automate this process? Thank you for your thoughts, feedback, and suggestions.

    Read the article

  • bash split text into limited character buckets (array member)

    - by soField
    I have text such as http://pastebin.com/H8zTbG54. We can say this text is a set of rules separated by "OR" at the end of lines. I need to put the set of lines (rules) into buckets (bash array members), but I have a character limit of 1024 for each array member. So each array member should contain a set of rules, but the character count of each array member cannot exceed 1024. Can anybody help me do that? Solaris 10.
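
    A minimal sketch of the bucketing idea (the input file name is a placeholder, the 1024 limit is taken from the question, and the array-append syntax is kept compatible with older bash releases such as the one shipped with Solaris 10):

        #!/bin/bash
        limit=1024
        nl=$'\n'
        buckets=()
        current=""
        while IFS= read -r line; do
            if [ -n "$current" ] && [ $(( ${#current} + ${#line} + 1 )) -gt "$limit" ]; then
                buckets[${#buckets[@]}]=$current   # close the current bucket, start a new one
                current=$line
            else
                current="${current:+$current$nl}$line"
            fi
        done < rules.txt
        [ -n "$current" ] && buckets[${#buckets[@]}]=$current
        echo "buckets: ${#buckets[@]}"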

    Read the article

  • C++: How to count all instantiated objects at runtime?

    - by nina
    I have a large framework consisting of many C++ classes. Is there a way, using any tools at runtime, to trace all the C++ objects that are being constructed and currently exist? For example, at a certain time t1 the application might have objects A1, A2 and B3, but at time t2 it has A1, A4, C2 and so on. This is a cross-platform framework, but I'm familiar with working in Linux, Solaris and (possibly) Mac OS X.

    Read the article

  • Where is my Git/Ungit Packages?

    - by T?n Tri?n Nguy?n
    I've installed the following packages:

        node --version : v0.10.4
        npm --version  : 1.2.18
        git --version  : 1.7.1

    and I used this command:

        npm install -g ungit

    I want to use Ungit/Git via Apache, but I don't know where the Git/Ungit DocumentRoot is so I can define it in a VirtualHost on port 80. I've tried to search for a folder named git or ungit, but nothing seemed quite right. Can anybody help me with this? Many thanks.
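
    For what it's worth, a couple of pointers (the port below is an assumption; check ungit's startup output): globally installed npm packages live under the directory npm reports, and ungit is a Node app that serves its own HTTP port rather than a static DocumentRoot, so with Apache you would normally proxy to it instead of pointing a VirtualHost at a folder.

        npm root -g        # shows where "npm install -g ungit" put the package
        ungit &            # starts ungit's own web server
        # Apache idea: ProxyPass / http://127.0.0.1:8448/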

    Read the article

  • How many megabytes per second may I expect from a gigabit USB 2.0 network card under linux?

    - by Nakedible
    I'm interested in the actual real-world throughput attainable with an external 1000BaseT USB 2.0 network card under Linux. I have been able to attain 90 megabytes per second on a PCI-E interface, but the USB 2.0 bus has a theoretical limit of 480 Mbit/s and in practice delivers less than 40 megabytes per second. Is the actual throughput attainable with such a card under Linux 40, 30, 20, or even as low as 10 megabytes per second, i.e. no better than a normal 100BaseT network card?
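
    If it helps, one straightforward way to measure the real throughput end to end (iperf must be installed on both machines; the host name is a placeholder):

        # on the machine at the other end of the link
        iperf -s
        # on the box with the USB NIC
        iperf -c other-host -t 30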

    Read the article
