Search Results

Search found 25442 results on 1018 pages for 'disk size'.


  • Get JVM to grow memory demand as needed up to size of VM limit?

    - by Ira Baxter
    We ship a Java application whose memory demand can vary quite a lot depending on the size of the data it is processing. If you don't set the max VM (virtual memory) size, the JVM quite often quits with a GC failure on big data. What we'd like to see is the JVM requesting more memory, as GC fails to provide enough, until the total available VM is exhausted; e.g., start with 128Mb and increase geometrically (or by some other step) whenever the GC fails. The JVM ("java") command line allows explicit setting of max VM sizes (the various -Xm* options), and you'd think that would be designed to be adequate. We try to do this in a .cmd file that we ship with the application, but if you pick any specific number, you get one of two bad behaviors: 1) if your number is small enough to work on most target systems (e.g., 1Gb), it isn't big enough for big data, or 2) if you make it very large, the JVM refuses to run on those systems whose actual VM is smaller than specified. How does one set up Java to use the available VM when needed, without knowing that number in advance, and without grabbing it all on startup?
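
    A minimal launcher sketch of one common workaround (not the poster's shipped .cmd): probe the machine's physical memory first, then re-exec the real application with a -Xmx derived from it. The com.sun.management bean is a HotSpot/vendor extension rather than standard Java, and com.example.App is a hypothetical main class:

        import java.lang.management.ManagementFactory;

        public class LauncherSketch {
            public static void main(String[] args) throws Exception {
                // Ask the OS how much physical memory exists instead of guessing a fixed -Xmx.
                com.sun.management.OperatingSystemMXBean os =
                        (com.sun.management.OperatingSystemMXBean) ManagementFactory.getOperatingSystemMXBean();
                long maxHeapMb = os.getTotalPhysicalMemorySize() / (1024 * 1024) * 3 / 4; // leave headroom
                // Re-exec the real application with a machine-appropriate ceiling.
                Process app = new ProcessBuilder("java", "-Xms128m", "-Xmx" + maxHeapMb + "m",
                        "com.example.App").inheritIO().start();
                System.exit(app.waitFor());
            }
        }

    The JVM still will not grow past that ceiling at runtime, but the ceiling is at least derived per machine instead of hard-coded.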


  • How to fit the image to the screen in the image view

    - by Pugal Devan
    Hi, I am new to iPhone development. I want to display an image at its actual size in an image view. I created the image view in Interface Builder and set its properties. The problem is that with the mode set to "Scale to Fill", the image gets stretched to fill the whole view. What I want is this: a 52x52 image should be displayed at 52x52 in the image view, while a 1200x1020 image should be scaled down to fit the image view. So an image larger than the image view should shrink to fit, and an image smaller than the image view should keep its original size (it should not be stretched to fill the view). Is there a possible solution to achieve this? Please guide me. Thanks.
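
    For comparison only (the question is about the iPhone, but the desired behavior has a direct analogue elsewhere): on Android's ImageView, written in Java, the CENTER_INSIDE scale type shrinks an oversized image to fit the view but never enlarges a small one. A sketch, with a hypothetical drawable resource:

        import android.app.Activity;
        import android.os.Bundle;
        import android.widget.ImageView;

        public class PhotoActivity extends Activity {
            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                ImageView iv = new ImageView(this);
                // Scales down a too-large image, leaves a smaller one at its original size.
                iv.setScaleType(ImageView.ScaleType.CENTER_INSIDE);
                iv.setImageResource(R.drawable.photo); // hypothetical resource id
                setContentView(iv);
            }
        }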


  • Kickstarting VMWare ESX 4.1 (Error: No NIC with name bootif)

    - by William
    I'm having an issue kickstarting an installation of VMware ESX Classic 4.1. I've stripped my kickstart down to:

        accepteula
        keyboard us
        auth
        clearpart --firstdisk --overwritevmfs
        url --url=10.16.0.1/cblr/ks_mirror/esx-classic-4.1.0-260247
        rootpw --iscrypted $1$zZJa3g7g$mD8d.6QgbPku1QovQTAps/
        timezone 'US/Pacific'
        network --addvmportgroup=true --device=vmnic0 --bootproto=dhcp
        part '/boot' --fstype=ext3 --size=1100 --onfirstdisk
        part 'none' --fstype=vmkcore --size=110 --onfirstdisk
        part 'datastore1' --fstype=vmfs3 --size=8920 --grow --onfirstdisk
        virtualdisk 'esxconsole' --size=7920 --onvmfs='datastore1'
        part 'swap' --fstype=swap --size=916 --onvirtualdisk='esxconsole'
        part '/var/log' --fstype=ext3 --size=2000 --onvirtualdisk='esxconsole'
        part '/' --fstype=ext3 --size=5000 --grow --onvirtualdisk='esxconsole'
        %post --interpreter=bash

    However, when I attempt to use this kickstart during a PXE install with no additional kernel options, I get the following error:

        There was a problem with the Network Device specified on the command line.
        Error: No NIC found with name bootif

    If I comment out the network line in the kickstart, the error changes to:

        There was a problem with the Network Device specified on the command line.
        Error: No NIC found with name eth0

    How can I fix this? Thanks.


  • Abstract data type list...

    - by aldrin
    A LIST is an ordered collection of items where items may be inserted anywhere in the list. Implement a LIST using an array as follows:

        struct list {
            int *items;   // pointer to the array
            int size;     // actual size of the array
            int count;    // number of items in the array
        };
        typedef struct list *List;  // pointer to the structure

    Implement the following functions:
    a) List newList(int size); - creates a new List and returns its pointer. Allocate space for the structure, allocate space for the array, initialize size and count, then return the pointer.
    b) void isEmpty(List list);
    c) void display(List list);
    d) int contains(List list, int item);
    e) void remove(List list, int i);
    f) void insertAfter(List list, int item, int i);
    g) void addEnd(List list, int item) - adds the item at the end of the list: simply store the data at position count, then increment count. If the array is full, allocate an array twice as big as the original (a sketch of this step follows the list). Example: with count = 5, size = 10 and contents [5, 10, 15, 20, 30], addEnd(list, 40) results in count = 6, size = 10, contents [5, 10, 15, 20, 30, 40].
    h) void addFront(List list, int item) - shifts all elements to the right so that the item can be placed at position 0, then increments count. Bonus: if the array is full, allocate an array twice as big as the original. Example: with count = 5, size = 10 and contents [5, 10, 15, 20, 30], addFront(list, 40) results in count = 6, size = 10, contents [40, 5, 10, 15, 20, 30].
    i) void removeFront(List list) - shifts all elements to the left and decrements count. Example: with count = 6, size = 10 and contents [40, 5, 10, 15, 20, 30], removeFront(list) results in count = 5, size = 10, contents [5, 10, 15, 20, 30].
    j) void remove(List list, int item) - gets the index of the item in the list and then shifts all elements to the left to close the gap, decrementing count. Example: with count = 6, size = 10 and contents [40, 5, 10, 15, 20, 30], remove(list, 10) results in count = 5, size = 10, contents [40, 5, 15, 20, 30].
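
    A minimal sketch of the doubling step in (g), written in Java for illustration (the assignment itself is in C; the fields mirror the struct above):

        import java.util.Arrays;

        class IntList {
            int[] items = new int[10]; // the backing array
            int size = 10;             // actual size of the array
            int count = 0;             // number of items in use

            // (g) addEnd: store at position count, then increment count;
            // if the array is full, first allocate one twice as big.
            void addEnd(int item) {
                if (count == size) {
                    items = Arrays.copyOf(items, size * 2);
                    size *= 2;
                }
                items[count++] = item;
            }
        }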


  • 32bit domU on 64bit dom0

    - by ModuleC
    I'm using a 64-bit CentOS dom0 with:

        2.6.18-164.15.1.el5xen #1 SMP Wed Mar 17 12:04:23 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux

    Recently I migrated some 32-bit CentOS domUs onto this node. Per the specs, 32-bit domUs should work with a 64-bit dom0. The domUs are paravirtualized, and everything works except the iptables limit match. Running csf on a domU puts the following messages in dmesg:

        ip_tables: (C) 2000-2006 Netfilter Core Team
        Netfilter messages via NETLINK v0.30.
        ip_conntrack version 2.4 (2080 buckets, 16640 max) - 304 bytes per conntrack
        ip_tables: limit match: invalid size 40 != 28
        ip_tables: limit match: invalid size 40 != 28
        ip_tables: limit match: invalid size 40 != 28
        ip_tables: limit match: invalid size 40 != 28
        ip_tables: limit match: invalid size 40 != 28
        ip_tables: limit match: invalid size 40 != 28
        ip_tables: limit match: invalid size 40 != 28

    Doing lsmod on both dom0 and domU lists all the required iptables modules as loaded. I found this: http://www.mail-archive.com/[email protected]/msg189433.html but didn't find anything on this issue for CentOS. Am I missing something?


  • How to estimate size of data to transfer when using DbCommand.ExecuteXXX?

    - by Yadyn
    I want to show the user detailed progress information when performing potentially lengthy database operations; specifically, when inserting/updating data that may be on the order of hundreds of KB or MB. Currently, I'm using in-memory DataTables and DataRows which are then synced with the database via TableAdapter.Update calls. This works fine and dandy, but the single call leaves little opportunity to glean any kind of progress info to show to the user. I have no idea how much data is passing through the network to the remote DB or what its progress is. Basically, all I know is when Update returns, and it is assumed complete (barring any errors or exceptions). But this means all I can show is 0%, then a pause, then 100%. I can count the number of rows, even going so far as to count how many are actually Modified or Added, and I could maybe even calculate an estimated size per DataRow based on the datatype of each column, using sizeof for value types like int and checking length for things like strings or byte arrays. With that, I could probably determine, before updating, an estimated total transfer size, but I'm still stuck without any progress info once Update is called on the TableAdapter. Am I stuck just using an indeterminate progress bar or a waiting mouse cursor? Would I need to radically change our data access layer to be able to hook into this kind of information? Even if I can't get it down to the precise KB transferred (like a web browser file download progress bar), could I at least know when each DataRow/DataTable finishes or something? How do you best show this kind of progress info using ADO.NET?
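
    The question is about ADO.NET, but the usual way to get progress out of a single bulk update is to split it into batches and treat each completed batch as a progress tick. A hedged sketch of that idea in Java/JDBC (the table and columns are hypothetical):

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;
        import java.util.List;

        public class BatchedUpdate {
            // Sends rows in chunks of 100 so each executeBatch() becomes a progress tick.
            static void updateWithProgress(Connection con, List<int[]> rows) throws SQLException {
                PreparedStatement ps =
                        con.prepareStatement("UPDATE t SET v = ? WHERE id = ?"); // hypothetical table
                int done = 0;
                for (int[] r : rows) {
                    ps.setInt(1, r[0]);
                    ps.setInt(2, r[1]);
                    ps.addBatch();
                    if (++done % 100 == 0) {
                        ps.executeBatch();
                        System.out.printf("%d/%d rows sent%n", done, rows.size());
                    }
                }
                ps.executeBatch(); // flush the remainder
                ps.close();
            }
        }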


  • Find element with attribute with minidom

    - by Xster
    Given

        <field name="frame.time_delta_displayed" showname="Time delta from previous displayed frame: 0.000008000 seconds" size="0" pos="0" show="0.000008000"/>
        <field name="frame.time_relative" showname="Time since reference or first frame: 0.000008000 seconds" size="0" pos="0" show="0.000008000"/>
        <field name="frame.number" showname="Frame Number: 2" size="0" pos="0" show="2"/>
        <field name="frame.pkt_len" showname="Packet Length: 1506 bytes" hide="yes" size="0" pos="0" show="1506"/>
        <field name="frame.len" showname="Frame Length: 1506 bytes" size="0" pos="0" show="1506"/>
        <field name="frame.cap_len" showname="Capture Length: 1506 bytes" size="0" pos="0" show="1506"/>
        <field name="frame.marked" showname="Frame is marked: False" size="0" pos="0" show="0"/>
        <field name="frame.protocols" showname="Protocols in frame: eth:ip:tcp:http:data" size="0" pos="0" show="eth:ip:tcp:http:data"/>

    How do I get the field with name="frame.len" right away without iterating through every tag and checking the attributes?
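
    minidom has no attribute-based lookup built in, so the usual escape hatch is an XPath query. The same idea sketched in Java's built-in javax.xml.xpath, for illustration ("capture.xml" is a hypothetical file holding the <field> elements above):

        import javax.xml.parsers.DocumentBuilderFactory;
        import javax.xml.xpath.XPathConstants;
        import javax.xml.xpath.XPathFactory;
        import org.w3c.dom.Document;
        import org.w3c.dom.Element;

        public class FindField {
            public static void main(String[] args) throws Exception {
                Document doc = DocumentBuilderFactory.newInstance()
                        .newDocumentBuilder()
                        .parse("capture.xml"); // hypothetical input file
                // Let the XPath engine do the scan instead of a hand-written loop.
                Element len = (Element) XPathFactory.newInstance().newXPath()
                        .evaluate("//field[@name='frame.len']", doc, XPathConstants.NODE);
                System.out.println(len.getAttribute("show")); // prints 1506
            }
        }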


  • TCP Tweaking options and Results: Any suggestions?

    - by krishnakumar
    I first tried the default Windows XP TCP options (by default there is no TCPWindowSize or TCP1323 entry in the registry). I then set those options dynamically using TCP Optimizer. Here I list the results with and without the TCP tweaks. I see no major improvement in TCP even after increasing the window size to the supposedly optimal value. What values should I set to increase the performance?

    Results (server to client, receiving 586 MB):

        Without any window size or MTU setting: total duration 03:47
        With window size extension (bandwidth 100 Mbps, latency 100 ms, BDP 1250000): TCPWindowSize 1250000, MTU 1500, TTL 128, total duration 03:44
        With window size extension: TCPWindowSize 64240, MTU 1500, TTL 112, total duration 03:49
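
    One application-level counterpart worth noting: a TCP connection can only open its window as far as the receiver's socket buffer allows, and that buffer is requested per socket before connecting. A sketch in Java (the host is hypothetical, and the OS may clamp the requested size):

        import java.net.InetSocketAddress;
        import java.net.Socket;

        public class WindowProbe {
            public static void main(String[] args) throws Exception {
                Socket s = new Socket();
                // Must be set before connect() so TCP window scaling can be negotiated.
                s.setReceiveBufferSize(1250000); // match the BDP computed above
                s.connect(new InetSocketAddress("example.com", 80)); // hypothetical host
                System.out.println("effective SO_RCVBUF: " + s.getReceiveBufferSize());
                s.close();
            }
        }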


  • Lua and Objective C not running script.

    - by beta
    I am trying to create an Objective-C interface that encapsulates the functionality of storing and running a Lua script (compiled or not). My code for the script interface is as follows:

        #import <Cocoa/Cocoa.h>
        #import "Types.h"
        #import "lua.h"
        #include "lualib.h"
        #include "lauxlib.h"

        @interface Script : NSObject<NSCoding> {
        @public
            s32 size;
            s8* data;
            BOOL done;
        }
        @property s32 size;
        @property s8* data;
        @property BOOL done;
        - (id) initWithScript: (u8*)data andSize:(s32)size;
        - (id) initFromFile: (const char*)file;
        - (void) runWithState: (lua_State*)state;
        - (void) encodeWithCoder: (NSCoder*)coder;
        - (id) initWithCoder: (NSCoder*)coder;
        @end

        #import "Script.h"

        @implementation Script
        @synthesize size;
        @synthesize data;
        @synthesize done;

        - (id) initWithScript: (s8*)d andSize:(s32)s {
            self = [super init];
            self->size = s;
            self->data = d;
            return self;
        }

        - (id) initFromFile:(const char *)file {
            FILE* p;
            p = fopen(file, "rb");
            if(p == NULL) return [super init];
            fseek(p, 0, SEEK_END);
            s32 fs = ftell(p);
            rewind(p);
            u8* buffer = (u8*)malloc(fs);
            fread(buffer, 1, fs, p);
            fclose(p);
            return [self initWithScript:buffer andSize:size];
        }

        - (void) runWithState: (lua_State*)state {
            if(luaL_loadbuffer(state, [self data], [self size], "Script") != 0) {
                NSLog(@"Error loading lua chunk.");
                return;
            }
            lua_pcall(state, 0, LUA_MULTRET, 0);
        }

        - (void) encodeWithCoder: (NSCoder*)coder {
            [coder encodeInt: size forKey: @"Script.size"];
            [coder encodeBytes:data length:size forKey:@"Script.data"];
        }

        - (id) initWithCoder: (NSCoder*)coder {
            self = [super init];
            NSUInteger actualSize;
            size = [coder decodeIntForKey: @"Script.size"];
            data = [[coder decodeBytesForKey:@"Script.data" returnedLength:&actualSize] retain];
            return self;
        }
        @end

    Here is the main method:

        #import "Script.h"

        int main(int argc, char* argv[]) {
            Script* script = [[Script alloc] initFromFile:"./test.lua"];
            lua_State* state = luaL_newstate();
            luaL_openlibs(state);
            luaL_dostring(state, "print(_VERSION)");
            [script runWithState:state];
            luaL_dostring(state, "print(_VERSION)");
            lua_close(state);
        }

    And the Lua script is just: print("O Hai World!") Loading the file is correct, but I think it messes up at pcall. Any help is greatly appreciated.


  • creating a Menu from SQLite values in Java

    - by shanahobo86
    I am trying to create a ListMenu using data from an SQLite database to define the name of each MenuItem. So in a class called menu.java I have defined the array String classes[] = {}; which should hold each menu item name. In a DBAdapter class I created a function so the user can insert info into a table (this all works fine, btw):

        public long insertContact(String name, String code, String location, String comments,
                                  int days, int start, int end, String type) {
            ContentValues initialValues = new ContentValues();
            initialValues.put(KEY_NAME, name);
            initialValues.put(KEY_CODE, code);
            initialValues.put(KEY_LOCATION, location);
            initialValues.put(KEY_COMMENTS, comments);
            initialValues.put(KEY_DAYS, days);
            initialValues.put(KEY_START, start);
            initialValues.put(KEY_END, end);
            initialValues.put(KEY_TYPE, type);
            return db.insert(DATABASE_TABLE, null, initialValues);
        }

    It is the strings inserted into KEY_NAME that I need to populate that string array with. Does anyone know if this is possible? Thanks so much for the help, guys.

    If I implement the function suggested by Sam/Mango the program crashes; am I using it incorrectly, or is the error due to the unknown size of the array?

        DBAdapter db = new DBAdapter(this);
        String classes[] = db.getClasses();

    Edit: I should mention that if I manually define the array (String classes[] = {"test1", "test2", "test3", etc};) it works fine. The error is a NullPointerException. Here's the logcat. I hadn't initialized with db = helper.getReadableDatabase(); in the getClasses() function, but unfortunately adding it didn't fix the problem.

        11-11 22:53:39.117: D/dalvikvm(17856): Late-enabling CheckJNI
        11-11 22:53:39.297: D/TextLayoutCache(17856): Using debug level: 0 - Debug Enabled: 0
        11-11 22:53:39.337: D/libEGL(17856): loaded /system/lib/egl/libGLES_android.so
        11-11 22:53:39.337: D/libEGL(17856): loaded /system/lib/egl/libEGL_adreno200.so
        11-11 22:53:39.357: D/libEGL(17856): loaded /system/lib/egl/libGLESv1_CM_adreno200.so
        11-11 22:53:39.357: D/libEGL(17856): loaded /system/lib/egl/libGLESv2_adreno200.so
        11-11 22:53:39.387: I/Adreno200-EGLSUB(17856): <ConfigWindowMatch:2078>: Format RGBA_8888.
        11-11 22:53:39.407: D/memalloc(17856): /dev/pmem: Mapped buffer base:0x5c66d000 size:36593664 offset:32825344 fd:65
        11-11 22:53:39.417: E/(17856): Can't open file for reading
        11-11 22:53:39.417: E/(17856): Can't open file for reading
        11-11 22:53:39.417: D/OpenGLRenderer(17856): Enabling debug mode 0
        11-11 22:53:39.477: D/memalloc(17856): /dev/pmem: Mapped buffer base:0x5ecd3000 size:40361984 offset:36593664 fd:68
        11-11 22:53:40.507: D/memalloc(17856): /dev/pmem: Mapped buffer base:0x61451000 size:7254016 offset:3485696 fd:71
        11-11 22:53:41.077: I/Adreno200-EGLSUB(17856): <ConfigWindowMatch:2078>: Format RGBA_8888.
        11-11 22:53:41.077: D/memalloc(17856): /dev/pmem: Mapped buffer base:0x61c4c000 size:7725056 offset:7254016 fd:74
        11-11 22:53:41.097: D/memalloc(17856): /dev/pmem: Mapped buffer base:0x623aa000 size:8196096 offset:7725056 fd:80
        11-11 22:53:41.937: D/memalloc(17856): /dev/pmem: Mapped buffer base:0x62b7b000 size:8667136 offset:8196096 fd:83
        11-11 22:53:41.977: D/memalloc(17856): /dev/pmem: Unmapping buffer base:0x61c4c000 size:7725056 offset:7254016
        11-11 22:53:41.977: D/memalloc(17856): /dev/pmem: Unmapping buffer base:0x623aa000 size:8196096 offset:7725056
        11-11 22:53:41.977: D/memalloc(17856): /dev/pmem: Unmapping buffer base:0x62b7b000 size:8667136 offset:8196096
        11-11 22:53:42.167: I/Adreno200-EGLSUB(17856): <ConfigWindowMatch:2078>: Format RGBA_8888.
        11-11 22:53:42.177: D/memalloc(17856): /dev/pmem: Mapped buffer base:0x61c5d000 size:17084416 offset:13316096 fd:74
        11-11 22:53:42.317: D/memalloc(17856): /dev/pmem: Mapped buffer base:0x63853000 size:20852736 offset:17084416 fd:80
        11-11 22:53:42.357: D/OpenGLRenderer(17856): Flushing caches (mode 0)
        11-11 22:53:42.357: D/memalloc(17856): /dev/pmem: Unmapping buffer base:0x5c66d000 size:36593664 offset:32825344
        11-11 22:53:42.357: D/memalloc(17856): /dev/pmem: Unmapping buffer base:0x5ecd3000 size:40361984 offset:36593664
        11-11 22:53:42.367: D/memalloc(17856): /dev/pmem: Unmapping buffer base:0x61451000 size:7254016 offset:3485696
        11-11 22:53:42.757: D/memalloc(17856): /dev/pmem: Mapped buffer base:0x5c56d000 size:24621056 offset:20852736 fd:65
        11-11 22:53:44.247: D/AndroidRuntime(17856): Shutting down VM
        11-11 22:53:44.247: W/dalvikvm(17856): threadid=1: thread exiting with uncaught exception (group=0x40ac3210)
        11-11 22:53:44.257: E/AndroidRuntime(17856): FATAL EXCEPTION: main
        11-11 22:53:44.257: E/AndroidRuntime(17856): java.lang.RuntimeException: Unable to instantiate activity ComponentInfo{niall.shannon.timetable/niall.shannon.timetable.menu}: java.lang.NullPointerException
        11-11 22:53:44.257: E/AndroidRuntime(17856): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:1891)
        11-11 22:53:44.257: E/AndroidRuntime(17856): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:1992)
        11-11 22:53:44.257: E/AndroidRuntime(17856): at android.app.ActivityThread.access$600(ActivityThread.java:127)
        11-11 22:53:44.257: E/AndroidRuntime(17856): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1158)
        11-11 22:53:44.257: E/AndroidRuntime(17856): at android.os.Handler.dispatchMessage(Handler.java:99)
        11-11 22:53:44.257: E/AndroidRuntime(17856): at android.os.Looper.loop(Looper.java:137)
        11-11 22:53:44.257: E/AndroidRuntime(17856): at android.app.ActivityThread.main(ActivityThread.java:4441)
        11-11 22:53:44.257: E/AndroidRuntime(17856): at java.lang.reflect.Method.invokeNative(Native Method)
        11-11 22:53:44.257: E/AndroidRuntime(17856): at java.lang.reflect.Method.invoke(Method.java:511)
        11-11 22:53:44.257: E/AndroidRuntime(17856): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:823)
        11-11 22:53:44.257: E/AndroidRuntime(17856): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:590)
        11-11 22:53:44.257: E/AndroidRuntime(17856): at dalvik.system.NativeStart.main(Native Method)
        11-11 22:53:44.257: E/AndroidRuntime(17856): Caused by: java.lang.NullPointerException
        11-11 22:53:44.257: E/AndroidRuntime(17856): at android.content.ContextWrapper.openOrCreateDatabase(ContextWrapper.java:221)
        11-11 22:53:44.257: E/AndroidRuntime(17856): at android.database.sqlite.SQLiteOpenHelper.getWritableDatabase(SQLiteOpenHelper.java:157)
        11-11 22:53:44.257: E/AndroidRuntime(17856): at niall.shannon.timetable.DBAdapter.getClasses(DBAdapter.java:151)
        11-11 22:53:44.257: E/AndroidRuntime(17856): at niall.shannon.timetable.menu.<init>(menu.java:15)
        11-11 22:53:44.257: E/AndroidRuntime(17856): at java.lang.Class.newInstanceImpl(Native Method)
        11-11 22:53:44.257: E/AndroidRuntime(17856): at java.lang.Class.newInstance(Class.java:1319)
        11-11 22:53:44.257: E/AndroidRuntime(17856): at android.app.Instrumentation.newActivity(Instrumentation.java:1023)
        11-11 22:53:44.257: E/AndroidRuntime(17856): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:1882)
        11-11 22:53:44.257: E/AndroidRuntime(17856): ... 11 more
        11-11 22:53:46.527: I/Process(17856): Sending signal. PID: 17856 SIG: 9
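
    A hedged sketch of a getClasses() for the DBAdapter described above, reusing its conventions (DATABASE_TABLE, KEY_NAME, the db handle). Note that the stack trace bottoms out in SQLiteOpenHelper.getWritableDatabase called from DBAdapter.getClasses via menu.<init>, which suggests the query runs from a field initializer before the activity (and its Context) is ready, so the helper cannot open the database yet:

        // Assumes the database has already been opened (e.g. via an open() method
        // that calls helper.getReadableDatabase()).
        public String[] getClasses() {
            Cursor c = db.query(DATABASE_TABLE, new String[] { KEY_NAME },
                    null, null, null, null, null);
            String[] names = new String[c.getCount()];
            int i = 0;
            while (c.moveToNext()) {
                names[i++] = c.getString(0);
            }
            c.close();
            return names;
        }

    And at the call site, query from onCreate rather than from a field initializer:

        DBAdapter db = new DBAdapter(this);
        db.open();                            // hypothetical open() wrapping getReadableDatabase()
        String[] classes = db.getClasses();
        db.close();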


  • Does the Lucene search function work on large documents?

    - by shaon-fan
    Hi there, I have a problem when searching with Lucene. First, the Lucene indexing function works well on huge documents, such as a .pst file (the Outlook mail storage): it can build an index that includes all the information of the .pst. The only problem is that the index is sometimes too large and includes very many words. When I search with Lucene, it can only process the front part of the index; if a word occurs in the back part, Lucene cannot find it and there are no hits in the result. But when I crudely split the index into several parts while debugging and search each part, it works well. So I want to know how to split an index, and what size should be the limit for searching? Cheers, and I await a reply.

    Update: following what Coady said, I set the length to the maximum, 2^31-1, but the search results still don't include what I want. Simply put, I convert the document's words to a string array to analyze; one document has 79,680 words including spaces and symbols. When I search for a certain word, it returns a count of just 300, although it actually has more than 300 occurrences. For the same reason, when I search for a word in the back part of the document, it can't be found either.

        // set the length
        idexwriter.SetMaxFieldLength(2147483647);

        // search
        IndexSearcher searcher = new IndexSearcher(Program.Parameters["INDEX_LOCATION"].ToString());
        Hits hits = searcher.Search(query);

    This is my code, the same as others use. I found this problem when I needed to count every word's hits in a document, and that is also how I found that it couldn't search words in the back part of the document. Is there a searcher length setting somewhere? How have you handled this problem?
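
    The place this limit usually bites is at indexing time: in Java Lucene (and its ports), IndexWriter keeps only the first 10,000 terms of a field by default, so words in the back part of a large document never reach the index at all; raising the limit only helps if the documents are then re-indexed. A sketch of lifting it at writer construction, using the Java Lucene 3.x API (the index path is hypothetical):

        import java.io.File;
        import org.apache.lucene.analysis.standard.StandardAnalyzer;
        import org.apache.lucene.index.IndexWriter;
        import org.apache.lucene.store.FSDirectory;
        import org.apache.lucene.util.Version;

        public class UnlimitedWriter {
            public static void main(String[] args) throws Exception {
                IndexWriter writer = new IndexWriter(
                        FSDirectory.open(new File("index")),       // hypothetical index path
                        new StandardAnalyzer(Version.LUCENE_30),
                        IndexWriter.MaxFieldLength.UNLIMITED);     // lift the 10,000-term default
                // ... add the documents again so the full text is actually indexed ...
                writer.close();
            }
        }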


  • C++ 2d Array Class Function Call Help

    - by johnny-conrad
    I hope this question takes a simple fix, and I am just missing something very small. I am in my second semester of C++ in college, and we are just getting into OOP. This is my first OOP program, and it is causing me a little problem. Here are the errors I am getting: Member function must be called or its address taken in function displayGrid(int,Cell ( *)[20]) Member function must be called or its address taken in function Turn(int,int,Cell ( *)[20]) Member function must be called or its address taken in function Turn(int,int,Cell ( *)[20]) Warning: Parameter 'grid' is never used in function displayGrid(int,Cell ( *)[20]) Here is all of my code. I am aware It is much more code than necessary, sorry if it makes it more difficult. I was worried that I might accidentally delete something. const int MAX=20; //Struct Cell holds player and their symbol. class Cell { private: int Player; char Symbol; public: Cell(void); void setPlayer(int); void setSymbol(char); int getPlayer(void); char getSymbol(void); }; Cell::Cell(void) { Symbol ='-'; } void Cell::setPlayer(int player_num) { Player = player_num; } void Cell::setSymbol(char rps) { Symbol = rps; } int Cell::getPlayer(void) { return Player; } char Cell::getSymbol(void) { return Symbol; } void Turn(int, int, Cell[MAX][MAX]); void displayGrid(int, Cell[MAX][MAX]); void main(void) { int size; cout << "How big would you like the grid to be: "; cin >> size; //Checks to see valid grid size while(size>MAX || size<3) { cout << "Grid size must between 20 and 3." << endl; cout << "Please re-enter the grid size: "; cin >> size; } int cnt=1; int full; Cell grid[MAX][MAX]; //I use full to detect when the game is over by squaring size. full = size*size; cout << "Starting a new game." << endl; //Exits loop when cnt reaches full. while(cnt<full+1) { displayGrid(size, grid); //calls function to display grid if(cnt%2==0) //if cnt is even then it's 2nd players turn cout << "Player 2's turn." << endl; else cout << "Player 1's turn" << endl; Turn(size, cnt, grid); //calls Turn do each players turn cnt++; } cout << endl; cout << "Board is full... Game Over" << endl; } void displayGrid(int size, Cell grid[MAX][MAX]) { cout << endl; cout << " "; for(int x=1; x<size+1; x++) // prints first row cout << setw(3) << x; // of numbers. cout << endl; //Nested-For prints the grid. for(int i=1; i<size+1; i++) { cout << setw(2) << i; for(int c=1; c<size+1; c++) { cout << setw(3) << grid[i][c].getSymbol; } cout << endl; } cout << endl; } void Turn(int size, int cnt, Cell grid[MAX][MAX]) { char temp; char choice; int row=0; int column=0; cout << "Enter the Row: "; cin >> row; cout << "Enter the Column: "; cin >> column; //puts what is in the current cell in "temp" temp = grid[row][column].getSymbol; //Checks to see if temp is occupied or not while(temp!='-') { cout << "Cell space is Occupied..." << endl; cout << "Enter the Row: "; cin >> row; cout << "Enter the Column: "; cin >> column; temp = grid[row][column].getSymbol; //exits loop when finally correct } if(cnt%2==0) //if cnt is even then its player 2's turn { cout << "Enter your Symbol R, P, or S (Capitals): "; cin >> choice; grid[row][column].setPlayer(1); in >> choice; } //officially sets choice to grid cell grid[row][column].setSymbol(choice); } else //if cnt is odd then its player 1's turn { cout << "Enter your Symbol r, p, or s (Lower-Case): "; cin >> choice; grid[row][column].setPlayer(2); //checks for valid input by user1 while(choice!= 'r' && choice!='p' && choice!='s') { cout << "Invalid Symbol... 
Please Re-Enter: "; cin >> choice; } //officially sets choice to grid cell. grid[row][column].setSymbol(choice); } cout << endl; } Thanks a lot for your help!


  • Orphan IBM JVM process

    - by Nicholas Key
    Hi people, I have this issue of an orphan IBM JVM process being created in the process tree. For example:

        C:\Program Files\IBM\WebSphere\AppServer\bin>wsadmin -lang jython -f "C:\Hello.py"

    Hello.py has this simple implementation:

        import time
        i = 0
        while (1):
            i = i + 1
            print "Hello World " + str(i)
            time.sleep(3.0)

    My machine has the following JVM information:

        C:\Program Files\WebSphere\java\bin>java -verbose:sizes -version
          -Xmca32K        RAM class segment increment
          -Xmco128K       ROM class segment increment
          -Xmns0K         initial new space size
          -Xmnx0K         maximum new space size
          -Xms4M          initial memory size
          -Xmos4M         initial old space size
          -Xmox1624995K   maximum old space size
          -Xmx1624995K    memory maximum
          -Xmr16K         remembered set size
          -Xlp4K          large page size
                          available large page sizes: 4K 4M
          -Xmso256K       operating system thread stack size
          -Xiss2K         java thread stack initial size
          -Xssi16K        java thread stack increment
          -Xss256K        java thread stack maximum size
        java version "1.6.0"
        Java(TM) SE Runtime Environment (build pwi3260sr6ifix-20091015_01(SR6+152211+155930+156106))
        IBM J9 VM (build 2.4, JRE 1.6.0 IBM J9 2.4 Windows Server 2003 x86-32 jvmwi3260sr6-20091001_43491 (JIT enabled, AOT enabled)
        J9VM - 20091001_043491
        JIT - r9_20090902_1330ifx1
        GC - 20090817_AA)
        JCL - 20091006_01

    While the program was running, I tried to kill it, and subsequently I found an orphan IBM JVM process in the process tree. Is there a way to fix this issue? Why is there an orphan process in the first place? Is there something wrong with my code? I really don't believe that my simplistic code is wrongly implemented. Any suggestions?


  • Abstract Data Type: can anyone help me with this? Thanks.

    - by Aga Hibaya
    Objectives: Implement the Abstract Data Type (ADT) List using dynamically allocated arrays and structures.

    Description: A LIST is an ordered collection of items where items may be inserted anywhere in the list. Implement a LIST using an array as follows:

        struct list {
            int *items;   // pointer to the array
            int size;     // actual size of the array
            int count;    // number of items in the array
        };
        typedef struct list *List;  // pointer to the structure

    Implement the following functions:
    a) List newList(int size); - creates a new List and returns its pointer. Allocate space for the structure, allocate space for the array, initialize size and count, then return the pointer.
    b) void isEmpty(List list);
    c) void display(List list);
    d) int contains(List list, int item);
    e) void remove(List list, int i);
    f) void insertAfter(List list, int item, int i);
    g) void addEnd(List list, int item) - adds the item at the end of the list: simply store the data at position count, then increment count. If the array is full, allocate an array twice as big as the original. Example: with count = 5, size = 10 and contents [5, 10, 15, 20, 30], addEnd(list, 40) results in count = 6, size = 10, contents [5, 10, 15, 20, 30, 40].
    h) void addFront(List list, int item) - shifts all elements to the right so that the item can be placed at position 0, then increments count (a sketch of the shifting steps in (h) and (i) follows the list). Bonus: if the array is full, allocate an array twice as big as the original. Example: with count = 5, size = 10 and contents [5, 10, 15, 20, 30], addFront(list, 40) results in count = 6, size = 10, contents [40, 5, 10, 15, 20, 30].
    i) void removeFront(List list) - shifts all elements to the left and decrements count. Example: with count = 6, size = 10 and contents [40, 5, 10, 15, 20, 30], removeFront(list) results in count = 5, size = 10, contents [5, 10, 15, 20, 30].
    j) void remove(List list, int item) - gets the index of the item in the list and then shifts all elements to the left to close the gap, decrementing count. Example: with count = 6, size = 10 and contents [40, 5, 10, 15, 20, 30], remove(list, 10) results in count = 5, size = 10, contents [40, 5, 15, 20, 30].
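
    A minimal sketch of the shifting steps in (h) and (i), written in Java for illustration (the assignment itself is in C; the fields mirror the struct above, and the doubling in addFront reuses the idea from (g)):

        import java.util.Arrays;

        class IntListFrontOps {
            int[] items = new int[10]; // the backing array
            int size = 10;             // actual size of the array
            int count = 0;             // number of items in use

            // (h) addFront: shift right by one, place the item at position 0.
            void addFront(int item) {
                if (count == size) {   // full: double first, as in (g)
                    items = Arrays.copyOf(items, size * 2);
                    size *= 2;
                }
                System.arraycopy(items, 0, items, 1, count);
                items[0] = item;
                count++;
            }

            // (i) removeFront: shift left by one and decrement count.
            void removeFront() {
                System.arraycopy(items, 1, items, 0, count - 1);
                count--;
            }
        }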


  • Change the colour of ablines on ggplot

    - by Sarah
    Using this data I am fitting a plot:

        p <- ggplot(dat, aes(x=log(Explan), y=Response)) +
          geom_point(aes(group=Area, colour=Area)) +
          geom_abline(slope=-0.062712, intercept=0.165886) +
          geom_abline(slope=-0.052300, intercept=-0.038691) +
          scale_x_continuous("log(Mass) (g)") +
          theme(axis.title.y=element_text(size=rel(1.2),vjust=0.2),
                axis.title.x=element_text(size=rel(1.2),vjust=0.2),
                axis.text.x=element_text(size=rel(1.3)),
                axis.text.y=element_text(size=rel(1.3)),
                text = element_text(size=13)) +
          scale_colour_brewer(palette="Set1")

    The two ablines represent the phylogenetically adjusted relationships for each Area trend. I am wondering: is it possible to get the ablines in the same colour palette as their corresponding area data? The first one specified is for Area A, the second for Area B. I used

        g <- ggplot_build(p)

    to find out that the first colour is #E41A1C and the second is #377EB8. However, when I try to use aes within the geom_abline() calls to specify these colours, i.e.

        p <- ggplot(dat, aes(x=log(Explan), y=Response)) +
          geom_point(aes(group=Area, colour=Area)) +
          geom_abline(slope=-0.062712, intercept=0.165886, aes(colour='#E41A1C')) +
          geom_abline(slope=-0.052300, intercept=-0.038691, aes(colour='#377EB8')) +
          scale_x_continuous("log(Mass) (g)") +
          theme(axis.title.y=element_text(size=rel(1.2),vjust=0.2),
                axis.title.x=element_text(size=rel(1.2),vjust=0.2),
                axis.text.x=element_text(size=rel(1.3)),
                axis.text.y=element_text(size=rel(1.3)),
                text = element_text(size=13)) +
          scale_colour_brewer(palette="Set1")

    it changes the colour of the points and adds to the legend, which I don't want to do. Any advice would be much appreciated!


  • C++ Operator overloading - 'recreating the Vector'

    - by Wallter
    I am currently in a college second-level programming course... We are working on operator overloading... to do this we are to rebuild the vector class... I was building the class and found that most of it is based on the [] operator. When I was trying to implement the + operator I ran into a weird error that my professor has not seen before (apparently since the class switched IDEs from MinGW to VS Express...) (I am using Visual Studio Express 2008 C++ edition...)

    Vector.h:

        #include <string>
        #include <iostream>
        using namespace std;

        #ifndef _VECTOR_H
        #define _VECTOR_H

        const int DEFAULT_VECTOR_SIZE = 5;

        class Vector {
        private:
            int * data;
            int size;
            int comp;
        public:
            inline Vector (int Comp = 5,int Size = 0) : comp(Comp), size(Size) {
                if (comp > 0) {
                    data = new int [comp];
                } else {
                    data = new int [DEFAULT_VECTOR_SIZE];
                    comp = DEFAULT_VECTOR_SIZE;
                }
            }
            int size_ () const { return size; }
            int comp_ () const { return comp; }
            bool push_back (int);
            bool push_front (int);
            void expand ();
            void expand (int);
            void clear ();
            const string at (int);
            int operator[ ](int);
            Vector& operator+ (Vector&);
            Vector& operator- (const Vector&);
            bool operator== (const Vector&);
            bool operator!= (const Vector&);
            ~Vector() { delete [] data; }
        };

        ostream& operator<< (ostream&, const Vector&);

        #endif

    Vector.cpp:

        #include <iostream>
        #include <string>
        #include "Vector.h"
        using namespace std;

        const string Vector::at(int i) {
            this[i];
        }

        void Vector::expand() {
            expand(size);
        }

        void Vector::expand(int n) {
            int * newdata = new int [comp * 2];
            if (*data != NULL) {
                for (int i = 0; i <= (comp); i++) {
                    newdata[i] = data[i];
                }
                newdata -= comp;
                comp += n;
                delete [] data;
                *data = *newdata;
            } else if ( *data == NULL || comp == 0) {
                data = new int [DEFAULT_VECTOR_SIZE];
                comp = DEFAULT_VECTOR_SIZE;
                size = 0;
            }
        }

        bool Vector::push_back(int n) {
            if (comp = 0) {
                expand();
            }
            for (int k = 0; k != 2; k++) {
                if ( size != comp ) {
                    data[size] = n;
                    size++;
                    return true;
                } else {
                    expand();
                }
            }
            return false;
        }

        void Vector::clear() {
            delete [] data;
            comp = 0;
            size = 0;
        }

        int Vector::operator[] (int place) {
            return (data[place]);
        }

        Vector& Vector::operator+ (Vector& n) {
            int temp_int = 0;
            if (size > n.size_() || size == n.size_()) {
                temp_int = size;
            } else if (size < n.size_()) {
                temp_int = n.size_();
            }
            Vector newone(temp_int);
            int temp_2_int = 0;
            for ( int j = 0; j <= temp_int && j <= n.size_() && j <= size; j++) {
                temp_2_int = n[j] + data[j];
                newone[j] = temp_2_int;
            }
            ////////////////////////////////////////////////////////////
            return newone;
            ////////////////////////////////////////////////////////////
        }

        ostream& operator<< (ostream& out, const Vector& n) {
            for (int i = 0; i <= n.size_(); i++) {
                ////////////////////////////////////////////////////////////
                out << n[i] << " ";
                ////////////////////////////////////////////////////////////
            }
            return out;
        }

    Errors:

        out << n[i] << " ";
        error C2678: binary '[' : no operator found which takes a left-hand operand of type 'const Vector' (or there is no acceptable conversion)

        return newone;
        error C2106: '=' : left operand must be l-value

    As stated above, I am a student going into Computer Science as my selected major; I would appreciate tips, pointers, and better ways to do stuff :D


  • Converting OpenGL co-ordinates to lower UIView (and UIImagePickerController)

    - by John Qualis
    Hi, I am new to OpenGL over iPhone. I am developing an iPhone app similar to a barcode reader but with an extra OpenGL layer. The bottommost layer is UIImagePickerController, then I use UIView on top and draw a rectangle at certain co-ordinates on the iphone screen. So far everything is OK. Then I am trying to draw an OpenGL 3-D model in that rectangle. I am able to load a 3-D model in the iPhone based on this code here - http://iphonedevelopment.blogspot.com/2008/12/start-of-wavefront-obj-file-loader.html I am not able to transform the co-ordinates of the rectangle into OpenGL co-ordinates. Appreciate any help. Do I need to use a matrix to translate the currentPosition of the 3-D model so it is drawn within myRect? The code is given below.. Appreciate any help/pointers in this regards. John -(void)setupView:(GLView*)view { const GLfloat zNear = 0.01, zFar = 1000.0, fieldOfView = 45.0; GLfloat size; glEnable(GL_DEPTH_TEST); glMatrixMode(GL_PROJECTION); size = zNear * tanf(DEGREES_TO_RADIANS(fieldOfView) / 2.0); CGRect rect = view.bounds; glFrustumf(-size, size, -size / (rect.size.width / rect.size.height), size / (rect.size.width / rect.size.height), zNear, zFar); glViewport(0, 0, rect.size.width, rect.size.height); glMatrixMode(GL_MODELVIEW); glLoadIdentity(); glClearColor(0.0f, 0.0f, 0.0f, 0.0f); NSString *path = [[NSBundle mainBundle] pathForResource:@"plane" ofType:@"obj"]; OpenGLWaveFrontObject *theObject = [[OpenGLWaveFrontObject alloc] initWithPath:path]; Vertex3D position; position.z = -8.0; position.y = 3.0; position.x = 2.0; theObject.currentPosition = position; self.plane = theObject; [theObject release]; } (void)drawView:(GLView*)view; { static GLfloat rotation = 0.0; glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glLoadIdentity(); glColor4f(0.0, 0.5, 1.0, 1.0); // the coordinates of the rectangle are // myRect.x, myRect.y, myRect.width, myRect.height // Do I need to use a matrix to translate the currentPosition of the // 3-D model so it is drawn within myRect? //glOrthof(-160.0f, 160.0f, -240.0f, 240.0f, -1.0f, 1.0f); [plane drawSelf]; }


  • external USB HD doesn't show up on 'sudo fdisk -l' on one Ubuntu, shows up on another (both 'precise')

    - by Menelaos
    I have a 1.5TB WD external USB HDD and two Ubuntu systems, both 'precise'. When I plug the disk on system A it doesn't show up on the output of sudo fdisk -l nor is it automatically mounted (note that that wasn't always the case - it used to appear in the past). When I plug it on system B (again Ubuntu precise) it shows up when I do a sudo fdisk -l (output appended at the end) and mounts automatically just fine. What does this discrepancy point to and what kind of diagnostics should I run / tools I should use to troubleshoot the problem? I followed the suggestion I received to do a sudo tail -f /var/log/syslog and the output is the following when I plug, unplug and replug the USB cable on System A: Sep 14 23:27:09 thorin mtp-probe: checking bus 2, device 3: "/sys/devices/pci0000:00/0000:00:13.2/usb2/2-2" Sep 14 23:27:09 thorin mtp-probe: bus: 2, device: 3 was not an MTP device Sep 14 23:28:01 thorin kernel: [ 338.994295] usb 2-2: USB disconnect, device number 3 Sep 14 23:28:04 thorin kernel: [ 341.808139] usb 2-2: new high-speed USB device number 4 using ehci_hcd Sep 14 23:28:04 thorin mtp-probe: checking bus 2, device 4: "/sys/devices/pci0000:00/0000:00:13.2/usb2/2-2" Sep 14 23:28:04 thorin mtp-probe: bus: 2, device: 4 was not an MTP device Sep 14 23:29:54 thorin AptDaemon: INFO: Quitting due to inactivity Sep 14 23:29:54 thorin AptDaemon: INFO: Quitting was requested (I guess the last two message are irrelevant). Output of sudo fdisk -l on System B $ sudo fdisk -l Disk /dev/sda: 1500.3 GB, 1500301910016 bytes 255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00070db4 Device Boot Start End Blocks Id System /dev/sda1 * 2048 2905114623 1452556288 83 Linux /dev/sda2 2905116670 2930276351 12579841 5 Extended /dev/sda5 2905116672 2930276351 12579840 82 Linux swap / Solaris Disk /dev/sdb: 250.1 GB, 250059350016 bytes 255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x9c849c84 Device Boot Start End Blocks Id System /dev/sdb1 * 63 488375999 244187968+ 7 HPFS/NTFS/exFAT Disk /dev/sdg: 1500.3 GB, 1500299395072 bytes 255 heads, 63 sectors/track, 182401 cylinders, total 2930272256 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x0003e17f Device Boot Start End Blocks Id System /dev/sdg1 2048 2930272255 1465135104 7 HPFS/NTFS/exFAT


  • Javascript Converter Coding Error ~ Showing Bug

    - by olivia
    Please help~! <HTML> <HEAD> <TITLE>Bra Size to Chest Size Converter - CM</TITLE> <SCRIPT LANGUAGE="JavaScript"> function CalculateSum(Atext, Btext, form) { var A = BratoNum(Btext); var B = parseFloat(CuptoNum(Btext)); form.Answer.value = A + B; } function ClearForm(form) { form.input_A.value = ""; form.input_B.value = ""; form.Answer.value = ""; } function BratoNum(str) { switch(str.toUpperCase()) { case "32": return 70; case "34": return 75; case "36": return 80; case "38": return 85; case "40": return 90; default: alert('You must enter a number between 32 and 40!'); return 'X'; } } function CuptoNum(str) { switch(str.toUpperCase()) { case "A": return 4; case "B": return 5; case "C": return 6; case "D": return 7; case "E": return 8; case "F": return 9; default: alert('You must enter a letter between A and F!'); return 'X'; } } // end of JavaScript functions --> </SCRIPT> </HEAD> <BODY> <P><FONT SIZE="+2">Bra Size to Chest Size Converter</FONT></P> <FORM NAME="Calculator" METHOD="post"> <P>Enter Bra Size: <INPUT TYPE=TEXT NAME="input_A" SIZE=8></P> <P>Enter Cup Size: <INPUT TYPE=TEXT NAME="input_B" SIZE=8></P> <P><INPUT TYPE="button" VALUE="Get Chest Size" name="AddButton" onClick="CalculateSum(this.form.input_A.value, this.form.input_B.value, this.form)"></P> <P>Your Chest Size is <INPUT TYPE=TEXT NAME="Answer" SIZE=8> inch</P> <P><INPUT TYPE="button" VALUE="Clear" name="ClearButton" onClick="ClearForm(this.form)"></P> </FORM> </BODY> </HTML>


  • Why am I getting an IndexOutOfBoundsException here?

    - by Berzerker
    I'm getting an index out of bounds exception thrown and I don't know why, within my replaceValue method below. [null, (10,4), (52,3), (39,9), (78,7), (63,8), (42,2), (50,411)] replacement value test:411 size=7 [null, (10,4), (52,3), (39,9), (78,7), (63,8), (42,2), (50,101)] removal test of :(10,4) [null, (39,9), (52,3), (42,2), (78,7), (63,8), (50,101)] size=6 I try to replace the value again here and get an error... package heappriorityqueue; import java.util.*; public class HeapPriorityQueue<K,V> { protected ArrayList<Entry<K,V>> heap; protected Comparator<K> comp; int size = 0; protected static class MyEntry<K,V> implements Entry<K,V> { protected K key; protected V value; protected int loc; public MyEntry(K k, V v,int i) {key = k; value = v;loc =i;} public K getKey() {return key;} public V getValue() {return value;} public int getLoc(){return loc;} public String toString() {return "(" + key + "," + value + ")";} void setKey(K k1) {key = k1;} void setValue(V v1) {value = v1;} public void setLoc(int i) {loc = i;} } public HeapPriorityQueue() { heap = new ArrayList<Entry<K,V>>(); heap.add(0,null); comp = new DefaultComparator<K>(); } public HeapPriorityQueue(Comparator<K> c) { heap = new ArrayList<Entry<K,V>>(); heap.add(0,null); comp = c; } public int size() {return size;} public boolean isEmpty() {return size == 0; } public Entry<K,V> min() throws EmptyPriorityQueueException { if (isEmpty()) throw new EmptyPriorityQueueException("Priority Queue is Empty"); return heap.get(1); } public Entry<K,V> insert(K k, V v) { size++; Entry<K,V> entry = new MyEntry<K,V>(k,v,size); heap.add(size,entry); upHeap(size); return entry; } public Entry<K,V> removeMin() throws EmptyPriorityQueueException { if (isEmpty()) throw new EmptyPriorityQueueException("Priority Queue is Empty"); if (size == 1) return heap.remove(1); Entry<K,V> min = heap.get(1); heap.set(1, heap.remove(size)); size--; downHeap(1); return min; } public V replaceValue(Entry<K,V> e, V v) throws InvalidEntryException, EmptyPriorityQueueException { // replace the value field of entry e in the priority // queue with the given value v, and return the old value This is where I am getting the IndexOutOfBounds exception, on heap.get(i); if (isEmpty()){ throw new EmptyPriorityQueueException("Priority Queue is Empty."); } checkEntry(e); int i = e.getLoc(); Entry<K,V> entry=heap.get(((i))); V oldVal = entry.getValue(); K key=entry.getKey(); Entry<K,V> insert = new MyEntry<K,V>(key,v,i); heap.set(i, insert); return oldVal; } public K replaceKey(Entry<K,V> e, K k) throws InvalidEntryException, EmptyPriorityQueueException, InvalidKeyException { // replace the key field of entry e in the priority // queue with the given key k, and return the old key if (isEmpty()){ throw new EmptyPriorityQueueException("Priority Queue is Empty."); } checkKey(k); checkEntry(e); K oldKey=e.getKey(); int i = e.getLoc(); Entry<K,V> entry = new MyEntry<K,V>(k,e.getValue(),i); heap.set(i,entry); downHeap(i); upHeap(i); return oldKey; } public Entry<K,V> remove(Entry<K,V> e) throws InvalidEntryException, EmptyPriorityQueueException{ // remove entry e from priority queue and return it if (isEmpty()){ throw new EmptyPriorityQueueException("Priority Queue is Empty."); } MyEntry<K,V> entry = checkEntry(e); if (size==1){ return heap.remove(size--); } int i = e.getLoc(); heap.set(i, heap.remove(size--)); downHeap(i); return entry; } protected void upHeap(int i) { while (i > 1) { if (comp.compare(heap.get(i/2).getKey(), heap.get(i).getKey()) <= 0) break; swap(i/2,i); i = i/2; } 
} protected void downHeap(int i) { int smallerChild; while (size >= 2*i) { smallerChild = 2*i; if ( size >= 2*i + 1) if (comp.compare(heap.get(2*i + 1).getKey(), heap.get(2*i).getKey()) < 0) smallerChild = 2*i+1; if (comp.compare(heap.get(i).getKey(), heap.get(smallerChild).getKey()) <= 0) break; swap(i, smallerChild); i = smallerChild; } } protected void swap(int j, int i) { heap.get(j).setLoc(i); heap.get(i).setLoc(j); Entry<K,V> temp; temp = heap.get(j); heap.set(j, heap.get(i)); heap.set(i, temp); } public String toString() { return heap.toString(); } protected MyEntry<K,V> checkEntry(Entry<K,V> ent) throws InvalidEntryException { if(ent == null || !(ent instanceof MyEntry)) throw new InvalidEntryException("Invalid entry."); return (MyEntry)ent; } protected void checkKey(K key) throws InvalidKeyException{ try{comp.compare(key,key);} catch(Exception e){throw new InvalidKeyException("Invalid key.");} } }
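
    A possible thing to check (an observation sketched under assumptions, not a verified fix): removeMin and remove move the last entry into slot i with heap.set(i, heap.remove(size)), but never update that entry's loc field, while swap does keep loc in sync. A later replaceValue(e, v) then reads a stale e.getLoc() that can exceed the current size, which would raise exactly this IndexOutOfBoundsException. Keeping loc in sync at every move would look like:

        // inside remove(Entry<K,V> e), after computing i = e.getLoc():
        Entry<K,V> moved = heap.remove(size--);
        moved.setLoc(i);   // without this, a later replaceValue reads a stale index
        heap.set(i, moved);
        downHeap(i);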


  • Is the size of a struct required to be an exact multiple of the alignment of that struct?

    - by Steve314
    Once again, I'm questioning a longstanding belief. Until today, I believed that the alignment of the following struct would normally be 4 and the size would normally be 5... struct example { int m_Assume_32_Bits; char m_Assume_8_Bit_Bytes; }; Because of this assumption, I have data structure code that uses offsetof to determine the distance in bytes between two adjacent items in an array. Today, I spotted some old code that was using sizeof where it shouldn't, couldn't understand why I hadn't had bugs from it, coded up a unit test - and the test surprised me by passing. A bit of investigation showed that the sizeof the type I used for the test (similar to the struct above) was an exact multiple of the alignment - ie 8 bytes. It had padding after the final member. Here is an example of why I never expected this... struct example2 { example m_Example; char m_Why_Cant_This_Be_At_Offset_6_Bytes; }; A bit of Googling showed examples that make it clear that this padding after the final member is allowed - for example http://en.wikipedia.org/wiki/Data_structure_alignment#Data_structure_padding (the "or at the end of the structure" bit). This is a bit embarrassing, as I recently posted this comment - Use of struct padding (my first comment to that answer). What I can't seem to determine is whether this padding to an exact multiple of the alignment is guaranteed by the C++ standard, or whether it is just something that is permitted and that some (but maybe not all) compilers do. So - is the size of a struct required to be an exact multiple of the alignment of that struct according to the C++ standard? If the C standard makes different guarantees, I'm interested in that too, but the focus is on C++.


  • VirtualBox limits size of .js file, that can be included from guest additions folder?

    - by c69
    This question might belong on SuperUser, but I'll try to ask it here anyway, because I believe some web developers may have encountered this weird behavior. When testing a site for IE8/WinXP compatibility on VirtualBox, I ran into a weird "$ is undefined" issue, caused by jQuery (and jQuery UI) not being included when referenced by a relative path that resolves to a file:/// URL, seemingly because their size was too big (above 200KB). Simply replacing the links to those two big files with http:// ones solved the issue for me. But here is the question: why did this happen? Is it a misconfiguration? A bug? A known design decision?

    Details:
    - VirtualBox 4.1.8
    - host OS: Win7 64-bit, guest OS: XP SP3 32-bit
    - guest additions installed; the page was launched from a VB shared folder
    - the bug manifested itself in all browsers (even in Opera, which ignores IE security settings, afaik)
    - IE configuration is default
    - the script was included like this: <script type="text/javascript" src="js/libs/jquery/jquery-1.7.2.js">
    - the exact size limit was not deduced


  • Is there a max recommended size on bundling js/css files due to chunking or packet loss?

    - by George Mauer
    So we've all heard that it's good to bundle your JavaScript. Of course it is, but it seems to me that the story is too simple. See if my logic makes sense here. Obviously, fewer HTTP requests means fewer round trips and hence better. However (and I don't know much about bare HTTP) aren't HTTP responses sent in chunks? And if a file is larger than one of those chunks, doesn't it have to be downloaded as multiple (possibly synchronous?) round trips? As opposed to this, several requests for files just under the chunking size would arrive much quicker, since modern web browsers download resources like scripts in parallel. Even if chunking is not an issue, it seems like there would be some max recommended size just due to the likelihood of packet loss alone, since a bundled file must wait until it is entirely downloaded to execute, versus the more lenient native rule that scripts must execute in order. Obviously there are also matters of browser caching and code volatility to consider, but can someone confirm this or explain why I'm off base? Does anyone have any numbers to put to it?


  • Who's setting TCP window size down to 0, Indy or Windows?

    - by François
    We have an application server which has been observed sending headers with TCP window size 0 at times when the network had congestion (at a client's site). We would like to know if it is Indy or the underlying Windows layer that is responsible for adjusting the TCP window size down from the nominal 64K in adaptation to the available throughput. And we would like to be able to act upon it becoming 0 (nothing gets sent, users wait = no good). So, any info, link, or pointer to Indy code is welcome...

    Disclaimer: I'm not a network specialist. Please keep the answer understandable for the average me ;-)

    Note: it's Indy9/D2007 on Windows Server 2003 SP2.

    More gory details: the TCP zero-window cases happen on the middle tier talking to the DB server. They happen at the same moments when end users complain of slowdowns in the client application (that's what triggered the network investigation). Two major network issues causing bottlenecks have been identified. The TCP zero window happened when there was network congestion, but may or may not have been caused by it. We want to know when it happens and have a way to do something (logging at least) in our code. So where should we hook in (in Indy?) to know when that condition occurs?


  • High CPU load for 1:30 minutes when mounting ext4-raid partition

    - by sirion
    I have a software RAID 5 with 5x2TB drives. I encrypted the raid with cryptsetup and put an ext4 partition on top. In the beginning, opening and mounting the raid took less than 10 seconds; now (for a few weeks) mounting alone takes 1:30 minutes and the CPU stays around 93% the whole time. The output of "time sudo mount /dev/mapper/8000 /media/8000" is:

        real 1m31.952s
        user 0m0.008s
        sys  1m25.229s

    At the same time, only one line is added to /var/log/syslog:

        kernel: [ 2240.921381] EXT4-fs (dm-1): mounted filesystem with ordered data mode. Opts: (null)

    My Ubuntu version is 12.04.1 LTS and no updates are pending. I checked the partition with fsck, and it says that all is OK. The "cryptsetup luksOpen" command only takes a few seconds. I also tried changing the raid bitmap (as was suggested in some forum), but it did not change the behaviour:

        sudo mdadm --grow /dev/md0 -b internal
        sudo mdadm --grow /dev/md0 -b none

    I had the idea that it might be the hardware being slow, but a read test with "sudo hdparm -t /dev/md0" spat out values between 62 and 159 MB/sec:

        Timing buffered disk reads: 382 MB in 3.00 seconds = 127.14 MB/sec
        Timing buffered disk reads: 482 MB in 3.02 seconds = 159.62 MB/sec
        Timing buffered disk reads: 190 MB in 3.03 seconds =  62.65 MB/sec
        Timing buffered disk reads: 474 MB in 3.02 seconds = 157.12 MB/sec

    Although I think it is strange that the read rate jumps by more than 100% - could that mean something? The speed test when reading from the mapped (decrypted) device shows similar behavior, although it is of course much slower. "sudo hdparm -t /dev/mapper/8000":

        Timing buffered disk reads:  56 MB in 3.02 seconds = 18.54 MB/sec
        Timing buffered disk reads: 122 MB in 3.09 seconds = 39.43 MB/sec
        Timing buffered disk reads: 134 MB in 3.02 seconds = 44.35 MB/sec

    The output of a verbose mount, "mount -vvv /dev/mapper/8000 /media/8000", does not help much:

        mount: fstab path: "/etc/fstab"
        mount: mtab path:  "/etc/mtab"
        mount: lock path:  "/etc/mtab~"
        mount: temp path:  "/etc/mtab.tmp"
        mount: UID:        0
        mount: eUID:       0
        mount: spec:  "/dev/mapper/8000"
        mount: node:  "/media/8000"
        mount: types: "(null)"
        mount: opts:  "(null)"
        mount: you didn't specify a filesystem type for /dev/mapper/8000
               I will try type ext4
        mount: mount(2) syscall: source: "/dev/mapper/8000", target: "/media/8000", filesystemtype: "ext4", mountflags: -1058209792, data: (null)

    Any idea where I could find additional information on why mounting takes so long, or what additional tests I could run?

