Search Results

Search found 27946 results on 1118 pages for 'output buffer empty'.


  • Online ALTER TABLE in MySQL 5.6

    - by Marko Mäkelä
    This is the low-level view of data dictionary language (DDL) operations in the InnoDB storage engine in MySQL 5.6. John Russell gave a more high-level view in his blog post April 2012 Labs Release – Online DDL Improvements. MySQL before the InnoDB Plugin Traditionally, the MySQL storage engine interface has taken a minimalistic approach to data definition language. The only natively supported operations were CREATE TABLE, DROP TABLE and RENAME TABLE. Consider the following example: CREATE TABLE t(a INT); INSERT INTO t VALUES (1),(2),(3); CREATE INDEX a ON t(a); DROP TABLE t; The CREATE INDEX statement would be executed roughly as follows: CREATE TABLE temp(a INT, INDEX(a)); INSERT INTO temp SELECT * FROM t; RENAME TABLE t TO temp2; RENAME TABLE temp TO t; DROP TABLE temp2; You could imagine that the database could crash when copying all rows from the original table to the new one. For example, it could run out of file space. Then, on restart, InnoDB would roll back the huge INSERT transaction. To fix things a little, a hack was added to ha_innobase::write_row for committing the transaction every 10,000 rows. Still, it was frustrating that even a simple DROP INDEX would make the table unavailable for modifications for a long time. Fast Index Creation in the InnoDB Plugin of MySQL 5.1 MySQL 5.1 introduced a new interface for CREATE INDEX and DROP INDEX. The old table-copying approach can still be forced by SET old_alter_table=0. This interface is used in MySQL 5.5 and in the InnoDB Plugin for MySQL 5.1. Apart from the ability to do a quick DROP INDEX, the main advantage is that InnoDB will execute a merge-sort algorithm before inserting the index records into each index that is being created. This should speed up the insert into the secondary index B-trees and potentially result in a better B-tree fill factor. The 5.1 ALTER TABLE interface was not perfect. For example, DROP FOREIGN KEY still invoked the table copy. Renaming columns could conflict with InnoDB foreign key constraints. Combining ADD KEY and DROP KEY in ALTER TABLE was problematic and not atomic inside the storage engine. The ALTER TABLE interface in MySQL 5.6 The ALTER TABLE storage engine interface was completely rewritten in MySQL 5.6. Instead of introducing a method call for every conceivable operation, MySQL 5.6 introduced a handful of methods, and data structures that keep track of the requested changes. In MySQL 5.6, online ALTER TABLE operation can be requested by specifying LOCK=NONE. Also LOCK=SHARED and LOCK=EXCLUSIVE are available. The old-style table copying can be requested by ALGORITHM=COPY. That one will require at least LOCK=SHARED. From the InnoDB point of view, anything that is possible with LOCK=EXCLUSIVE is also possible with LOCK=SHARED. Most ALGORITHM=INPLACE operations inside InnoDB can be executed online (LOCK=NONE). InnoDB will always require an exclusive table lock in two phases of the operation. The execution phases are tied to a number of methods: handler::check_if_supported_inplace_alter Checks if the storage engine can perform all requested operations, and if so, what kind of locking is needed. handler::prepare_inplace_alter_table InnoDB uses this method to set up the data dictionary cache for upcoming CREATE INDEX operation. We need stubs for the new indexes, so that we can keep track of changes to the table during online index creation. Also, crash recovery would drop any indexes that were incomplete at the time of the crash. 
handler::inplace_alter_table In InnoDB, this method is used for creating secondary indexes or for rebuilding the table. This is the ‘main’ phase that can be executed online (with concurrent writes to the table). handler::commit_inplace_alter_table This is where the operation is committed or rolled back. Here, InnoDB would drop any indexes, rename any columns, drop or add foreign keys, and finalize a table rebuild or index creation. It would also discard any logs that were set up for online index creation or table rebuild. The prepare and commit phases require an exclusive lock, blocking all access to the table. If MySQL times out while upgrading the table meta-data lock for the commit phase, it will roll back the ALTER TABLE operation. In MySQL 5.6, data definition language operations are still not fully atomic, because the data dictionary is split. Part of it is inside InnoDB data dictionary tables. Part of the information is only available in the *.frm file, which is not covered by any crash recovery log. But, there is a single commit phase inside the storage engine. Online Secondary Index Creation It may occur that an index needs to be created on a new column to speed up queries. But, it may be unacceptable to block modifications on the table while creating the index. It turns out that it is conceptually not so hard to support online index creation. All we need is some more execution phases: Set up a stub for the index, for logging changes. Scan the table for index records. Sort the index records. Bulk load the index records. Apply the logged changes. Replace the stub with the actual index. Threads that modify the table will log the operations to the logs of each index that is being created. Errors, such as log overflow or uniqueness violations, will only be flagged by the ALTER TABLE thread. The log is conceptually similar to the InnoDB change buffer. The bulk load of index records will bypass record locking. We still generate redo log for writing the index pages. It would suffice to log page allocations only, and to flush the index pages from the buffer pool to the file system upon completion. Native ALTER TABLE Starting with MySQL 5.6, InnoDB supports most ALTER TABLE operations natively. The notable exceptions are changes to the column type, ADD FOREIGN KEY except when foreign_key_checks=0, and changes to tables that contain FULLTEXT indexes. The keyword ALGORITHM=INPLACE is somewhat misleading, because certain operations cannot be performed in-place. For example, changing the ROW_FORMAT of a table requires a rebuild. Online operation (LOCK=NONE) is not allowed in the following cases: when adding an AUTO_INCREMENT column, when the table contains FULLTEXT indexes or a hidden FTS_DOC_ID column, or when there are FOREIGN KEY constraints referring to the table, with ON…CASCADE or ON…SET NULL option. The FOREIGN KEY limitations are needed, because MySQL does not acquire meta-data locks on the child or parent tables when executing SQL statements. Theoretically, InnoDB could support operations like ADD COLUMN and DROP COLUMN in-place, by lazily converting the table to a newer format. This would require that the data dictionary keep multiple versions of the table definition. For simplicity, we will copy the entire table, even for DROP COLUMN. The bulk copying of the table will bypass record locking and undo logging. For facilitating online operation, a temporary log will be associated with the clustered index of table. Threads that modify the table will also write the changes to the log. 
When altering the table, we skip all records that have been marked for deletion. In this way, we can simply discard any undo log records that were not yet purged from the original table. Off-page columns, or BLOBs, are an important consideration. We suspend the purge of delete-marked records if it would free any off-page columns from the old table. This is because the BLOBs can be needed when applying changes from the log. We have special logging for handling the ROLLBACK of an INSERT that inserted new off-page columns. This is because the columns will be freed at rollback.
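
    For reference, requesting the online path from a client looks like the sketch below: Python with mysql-connector-python (an assumption; the table, index and connection details are made up). Spelling out ALGORITHM=INPLACE, LOCK=NONE makes MySQL 5.6 refuse the statement with an error if the online code path cannot be used, rather than silently falling back to a locking table copy.

        # Hypothetical sketch: ask MySQL 5.6+ for an online secondary-index build.
        # mysql-connector-python is assumed to be installed; names are placeholders.
        import mysql.connector

        conn = mysql.connector.connect(user="app", password="secret", database="test")
        cur = conn.cursor()

        # LOCK=NONE requests the online path; MySQL raises an error if the
        # storage engine cannot satisfy it, instead of quietly taking a lock.
        cur.execute("ALTER TABLE t ADD INDEX idx_a (a), ALGORITHM=INPLACE, LOCK=NONE")

        cur.close()
        conn.close()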

    Read the article

  • STDOUT can not return to Screen

    - by rockyurock
    Hello all, below is part of my code. The code enters the "if" block with $value = 1 and the output of the "iperf.exe" process goes into my_output.txt. As I am timing out the process after the alarm time (20 sec), I wanted to capture the output of this process only and then continue at the command prompt, but I am not able to return to the command prompt. Not only that, the code itself does not PRINT on the command prompt; rather it keeps printing into the my_output.txt file (I am looping this "if" block through the rest of my code).

    output.txt
    ==========

        inside value loop2
        ------------------------------------------------------------
        Server listening on UDP port 5001
        Receiving 1470 byte datagrams
        UDP buffer size: 8.00 KByte (default)
        ------------------------------------------------------------
        [160] local 10.232.62.151 port 5001 connected with 10.232.62.151 port 1505
        [ ID] Interval       Transfer     Bandwidth       Jitter   Lost/Total Datagrams
        [160]  0.0- 5.0 sec  2.14 MBytes  3.59 Mbits/sec  0.000 ms    0/ 1528 (0%)
        inside value loop3
        clue1
        clue2
        inside value loop4
        one iperf completed
        Transfer Transfer
        Starting: Intent { act=android.settings.APN_SETTINGS }
        ******AUTOMATION COMPLETED******

    It looks like there is some problem with reinitializing STDOUT. I even tried to use close(STDOUT); but again it did not return to STDOUT. Could somebody please help out?

    /rocky

    CODE:

        if($value) {
            my $file = 'my_output.txt';
            use Win32::Process;
            print "inside value loop\n";

            # redirect stdout to a file
            open STDOUT, '>', $file or die "can't redirect STDOUT to <$file> $!";

            Win32::Process::Create(my $ProcessObj,
                                   "iperf.exe",
                                   "iperf.exe -u -s -p 5001",
                                   0,
                                   NORMAL_PRIORITY_CLASS,
                                   ".") || die ErrorReport();

            $alarm_time = $IPERF_RUN_TIME + 2;    # 20 sec
            print "inside value loop2\n";
            sleep $alarm_time;
            $ProcessObj->Kill(0);

            sub ErrorReport {
                print Win32::FormatMessage( Win32::GetLastError() );
            }

            print "inside value loop3\n";
            print "clue1\n";
            #close(STDOUT);
            print "clue2\n";
            print "inside value loop4\n";
            print "one iperf completed\n";
        }

        my $data_file = "my_output.txt";
        open(ROCK, $data_file) || die("Could not open file!");
        @raw_data = <ROCK>;
        @COUNT_PS = split(/ /, $raw_data[7]);
        my $LOOP_COUNT_PS_4 = $COUNT_PS[9];
        my $LOOP_COUNT_PS_5 = $COUNT_PS[10];
        print "$LOOP_COUNT_PS_4\n";
        print "$LOOP_COUNT_PS_5\n";
        my $tput_value = "$LOOP_COUNT_PS_4" . " $LOOP_COUNT_PS_5";
        print "$tput_value";
        close(ROCK);
        print FH1 "\n $count \| $tput_value \n";

    regds,
    rakesh
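
    A common way out is to duplicate STDOUT before redirecting it and restore the duplicate afterwards (in Perl: open(my $oldout, '>&', \*STDOUT); ... open(STDOUT, '>&', $oldout);) rather than reopening or closing STDOUT outright. The same save/redirect/restore idea is sketched below in Python at the file-descriptor level, purely to illustrate the technique; it is not a drop-in fix for the script above.

        # Minimal sketch of the "save, redirect, restore" pattern for stdout.
        import os
        import sys

        saved_fd = os.dup(1)                      # keep a copy of the real stdout
        log_fd = os.open("my_output.txt", os.O_WRONLY | os.O_CREAT | os.O_TRUNC)

        os.dup2(log_fd, 1)                        # fd 1 now points at the file
        print("this line goes to my_output.txt")
        sys.stdout.flush()

        os.dup2(saved_fd, 1)                      # put the console back on fd 1
        os.close(log_fd)
        os.close(saved_fd)
        print("this line is on the screen again")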

    Read the article

  • SMS war continues, ideas welcome

    - by Pavel Radzivilovsky
    I am trying to make a Telit U9 modem send SMS messages. I think I handle the protocol correctly; at least, I manage to send them, but only under these circumstances: the native application was executed beforehand and then killed by Task Manager (without giving it a chance to initialize things). It looks like the supplied application is good at doing certain initialization/deinitialization which is critical. I also see the difference between the two states in the output of the AT+CIND command. When I try to do things on my own, it returns zeroes (including signal quality), but when I run the same command after killing the native application, the output looks reasonable. I am nearly out of ideas. I have tried many things, including attempts to spy on the modem's COM ports (didn't work). I haven't tried setting Windows hooks to see what the application is trying to get through. Perhaps you have encountered a similar situation?
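
    For comparison, a bare-bones text-mode SMS sequence over the AT port is sketched below in Python with pyserial (pyserial, the COM port and the phone number are all assumptions, not taken from the question). Checking AT+CPIN?, AT+CREG? and AT+CIND? first helps confirm the module is actually registered on the network, which fits the observation above that AT+CIND returns zeroes in the non-working state.

        # Hedged sketch: text-mode SMS over a serial AT port (pyserial assumed).
        import time
        import serial

        port = serial.Serial("COM5", 115200, timeout=2)   # placeholder port

        def at(cmd, wait=0.5):
            port.write((cmd + "\r").encode("ascii"))
            time.sleep(wait)
            return port.read(port.in_waiting or 1).decode("ascii", "replace")

        print(at("AT"))            # basic liveness check
        print(at("AT+CPIN?"))      # SIM ready?
        print(at("AT+CREG?"))      # network registration
        print(at("AT+CIND?"))      # indicators, incl. signal quality
        print(at("AT+CMGF=1"))     # text mode

        port.write(b'AT+CMGS="+15551234567"\r')           # placeholder number
        time.sleep(0.5)
        port.write(b"hello from python\x1a")              # Ctrl-Z ends the message
        time.sleep(5)
        print(port.read(port.in_waiting or 1).decode("ascii", "replace"))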

    Read the article

  • Text goes into separate lines when using a percentage padding

    - by Haxed
    Hi, I have a small problem with HTML forms. When I use the following CSS:

        .row .section-1 label { /* formatting all section-1 labels */
            padding: 0 5px 0 10px;
            margin: 3px 0 0 0;
            display: block;
        }

    this is the output that I get. However, when I edit the CSS to express the padding as a percentage:

        .row .section-1 label { /* formatting all section-1 labels */
            padding: 0 40% 0 80%;
            margin: 3px 0 0 0;
            display: block;
        }

    this is the output I get. I want to use percentages, but I also want the look in the first image. Any ideas? Thanks

    Read the article

  • Eclipse Class File Metadata

    - by Alex
    In Visual Studio, I can obtain a succinct list of public methods/members exposed in a class for which I do not have the source (i.e. bundled inside a DLL) by pressing F12 (GoToDefinition). Similarly, I am learning the Android API - in Eclipse. Jumping to an Android framework method definition produces decompilation output which is not intuitive to read, and is very verbose. To mimic results like Visual Studio, I am considering several options: How can I format the decompilation output to be 'cleaner' - I have looked through Eclipse's preferences menus and have not found a way to do this. How do I 'add corresponding source files' once Google provides it, so that jumping to definition yields the actual definition? Is there a plugin that does this already? I looked into Jadclipse, but that project has not been updated in several years, and is still a decompiler. Thank you in advance.

    Read the article

  • GLSL: Strange light reflections

    - by Tom
    According to this tutorial I'm trying to do normal mapping using GLSL, but something is wrong and I can't find the solution. The output render is in this image: Image1. In this image there is a plane made of two triangles, and each of them is illuminated differently (that is bad). The plane has 6 vertices. In the upper left side of this plane are 2 identical vertices (same in the lower right). Here are some vectors that are the same for each vertex:

        normal vector    =  0, 1,  0   (red lines on the image)
        tangent vector   =  0, 0, -1   (green lines on the image)
        bitangent vector = -1, 0,  0   (blue lines on the image)

    Here I have one question: do the two identical vertices need to have the same tangent and bitangent? I have tried other values for the tangents, but the effect was still similar. Here are my shaders.

    Vertex shader:

        #version 130

        // Input vertex data, different for all executions of this shader.
        in vec3 vertexPosition_modelspace;
        in vec2 vertexUV;
        in vec3 vertexNormal_modelspace;
        in vec3 vertexTangent_modelspace;
        in vec3 vertexBitangent_modelspace;

        // Output data ; will be interpolated for each fragment.
        out vec2 UV;
        out vec3 Position_worldspace;
        out vec3 EyeDirection_cameraspace;
        out vec3 LightDirection_cameraspace;
        out vec3 LightDirection_tangentspace;
        out vec3 EyeDirection_tangentspace;

        // Values that stay constant for the whole mesh.
        uniform mat4 MVP;
        uniform mat4 V;
        uniform mat4 M;
        uniform mat3 MV3x3;
        uniform vec3 LightPosition_worldspace;

        void main(){
            // Output position of the vertex, in clip space : MVP * position
            gl_Position = MVP * vec4(vertexPosition_modelspace,1);

            // Position of the vertex, in worldspace : M * position
            Position_worldspace = (M * vec4(vertexPosition_modelspace,1)).xyz;

            // Vector that goes from the vertex to the camera, in camera space.
            // In camera space, the camera is at the origin (0,0,0).
            vec3 vertexPosition_cameraspace = ( V * M * vec4(vertexPosition_modelspace,1)).xyz;
            EyeDirection_cameraspace = vec3(0,0,0) - vertexPosition_cameraspace;

            // Vector that goes from the vertex to the light, in camera space. M is omitted because it's identity.
            vec3 LightPosition_cameraspace = ( V * vec4(LightPosition_worldspace,1)).xyz;
            LightDirection_cameraspace = LightPosition_cameraspace + EyeDirection_cameraspace;

            // UV of the vertex. No special space for this one.
            UV = vertexUV;

            // model to camera = ModelView
            vec3 vertexTangent_cameraspace = MV3x3 * vertexTangent_modelspace;
            vec3 vertexBitangent_cameraspace = MV3x3 * vertexBitangent_modelspace;
            vec3 vertexNormal_cameraspace = MV3x3 * vertexNormal_modelspace;

            mat3 TBN = transpose(mat3(
                vertexTangent_cameraspace,
                vertexBitangent_cameraspace,
                vertexNormal_cameraspace
            )); // You can use dot products instead of building this matrix and transposing it. See References for details.

            LightDirection_tangentspace = TBN * LightDirection_cameraspace;
            EyeDirection_tangentspace = TBN * EyeDirection_cameraspace;
        }

    Fragment shader:

        #version 130

        // Interpolated values from the vertex shaders
        in vec2 UV;
        in vec3 Position_worldspace;
        in vec3 EyeDirection_cameraspace;
        in vec3 LightDirection_cameraspace;
        in vec3 LightDirection_tangentspace;
        in vec3 EyeDirection_tangentspace;

        // Output data
        out vec3 color;

        // Values that stay constant for the whole mesh.
        uniform sampler2D DiffuseTextureSampler;
        uniform sampler2D NormalTextureSampler;
        uniform sampler2D SpecularTextureSampler;
        uniform mat4 V;
        uniform mat4 M;
        uniform mat3 MV3x3;
        uniform vec3 LightPosition_worldspace;

        void main(){
            // Light emission properties
            // You probably want to put them as uniforms
            vec3 LightColor = vec3(1,1,1);
            float LightPower = 40.0;

            // Material properties
            vec3 MaterialDiffuseColor = texture2D( DiffuseTextureSampler, vec2(UV.x,-UV.y) ).rgb;
            vec3 MaterialAmbientColor = vec3(0.1,0.1,0.1) * MaterialDiffuseColor;
            //vec3 MaterialSpecularColor = texture2D( SpecularTextureSampler, UV ).rgb * 0.3;
            vec3 MaterialSpecularColor = vec3(0.5,0.5,0.5);

            // Local normal, in tangent space. V tex coordinate is inverted because normal map is in TGA (not in DDS) for better quality
            vec3 TextureNormal_tangentspace = normalize(texture2D( NormalTextureSampler, vec2(UV.x,-UV.y) ).rgb*2.0 - 1.0);

            // Distance to the light
            float distance = length( LightPosition_worldspace - Position_worldspace );

            // Normal of the computed fragment, in camera space
            vec3 n = TextureNormal_tangentspace;
            // Direction of the light (from the fragment to the light)
            vec3 l = normalize(LightDirection_tangentspace);
            // Cosine of the angle between the normal and the light direction,
            // clamped above 0
            //  - light is at the vertical of the triangle -> 1
            //  - light is perpendicular to the triangle -> 0
            //  - light is behind the triangle -> 0
            float cosTheta = clamp( dot( n,l ), 0,1 );

            // Eye vector (towards the camera)
            vec3 E = normalize(EyeDirection_tangentspace);
            // Direction in which the triangle reflects the light
            vec3 R = reflect(-l,n);
            // Cosine of the angle between the Eye vector and the Reflect vector,
            // clamped to 0
            //  - Looking into the reflection -> 1
            //  - Looking elsewhere -> < 1
            float cosAlpha = clamp( dot( E,R ), 0,1 );

            color =
                // Ambient : simulates indirect lighting
                MaterialAmbientColor +
                // Diffuse : "color" of the object
                MaterialDiffuseColor * LightColor * LightPower * cosTheta / (distance*distance) +
                // Specular : reflective highlight, like a mirror
                MaterialSpecularColor * LightColor * LightPower * pow(cosAlpha,5) / (distance*distance);

            //color.xyz = E;
            //color.xyz = LightDirection_tangentspace;
            //color.xyz = EyeDirection_tangentspace;
        }

    I have replaced the original color value with the EyeDirection_tangentspace vector and then I got another strange effect, but I cannot link the image (not enough reputation). Is it possible that something is wrong with these shaders, or maybe somewhere else in my code, e.g. with my matrices?

    SOLVED: 3 days needed for changing one letter, from this:

        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glVertexAttribPointer(
            4,                        // attribute
            3,                        // size
            GL_FLOAT,                 // type
            GL_FALSE,                 // normalized?
            sizeof(VboVertex),        // stride
            (void*)(12*sizeof(float)) // array buffer offset
        );

    to this:

        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glVertexAttribPointer(
            4,                        // attribute
            3,                        // size
            GL_FLOAT,                 // type
            GL_FALSE,                 // normalized?
            sizeof(VboVertex),        // stride
            (void*)(11*sizeof(float)) // array buffer offset
        );

    See the difference? :)
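
    As for the question about the two coincident vertices: on a flat plane with one continuous UV mapping they would normally carry the same tangent and bitangent; if they differ, the interpolated TBN basis changes across the seam and the two triangles end up lit differently. The sketch below (Python with numpy, purely illustrative; the light direction is a made-up value) mirrors what the vertex shader does with the TBN matrix.

        # Sketch of the TBN construction from the per-vertex vectors quoted above
        # (numpy assumed; the camera transform is omitted for brevity).
        import numpy as np

        tangent   = np.array([0.0, 0.0, -1.0])
        bitangent = np.array([-1.0, 0.0, 0.0])
        normal    = np.array([0.0, 1.0, 0.0])

        # Same as mat3 TBN = transpose(mat3(T, B, N)) in GLSL: rows are T, B, N,
        # so multiplying by it expresses a camera-space vector in tangent space.
        TBN = np.vstack([tangent, bitangent, normal])

        light_dir_cameraspace = np.array([0.3, 0.8, 0.5])   # made-up direction
        print(TBN @ light_dir_cameraspace)

        # If the two coincident vertices carried different tangents, this matrix
        # would differ between the triangles and so would the resulting lighting.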

    Read the article

  • Table and Image in a single page of QTextDocument in Qt4

    - by liaK
    Hi, I want to display a table and an image side by side, i.e. the image on the left and the table on the right. I want this because the image is the reference image for the data present in the table. I want the output as a PDF, so I am using QTextDocument, QTextCursor and QPrinter to generate the PDF. So how is it possible to display the image and the table in a QTextDocument, i.e. within a single page of the PDF? I am using Qt 4.5.3 and Windows XP. Any pointers regarding this are welcome.
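
    One common layout trick is an outer one-row, two-column QTextTable: the image goes into the left cell and the data table into the right cell, and the document is then printed to PDF with QPrinter. Below is a rough sketch in Python with PyQt5 (an assumption; the question targets C++/Qt 4.5.3, where the same classes and calls exist). File names and table contents are placeholders.

        # Hedged sketch: image and table side by side in one QTextDocument page.
        import sys
        from PyQt5.QtWidgets import QApplication
        from PyQt5.QtGui import QTextCursor, QTextDocument, QTextTableFormat
        from PyQt5.QtPrintSupport import QPrinter

        app = QApplication(sys.argv)              # printing needs an application object

        doc = QTextDocument()
        cursor = QTextCursor(doc)

        outer_fmt = QTextTableFormat()
        outer_fmt.setBorder(0)                    # invisible layout table
        outer = cursor.insertTable(1, 2, outer_fmt)

        left = outer.cellAt(0, 0).firstCursorPosition()
        left.insertImage("reference.png")         # placeholder image path

        right = outer.cellAt(0, 1).firstCursorPosition()
        data = right.insertTable(3, 2)            # the actual data table
        for row in range(3):
            for col in range(2):
                data.cellAt(row, col).firstCursorPosition().insertText(f"r{row} c{col}")

        printer = QPrinter()
        printer.setOutputFormat(QPrinter.PdfFormat)
        printer.setOutputFileName("report.pdf")
        doc.print_(printer)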

    Read the article

  • mysql GROUP_CONCAT

    - by user301766
    I want to list all users with their corresponding user class. Here are simplified versions of my tables:

        CREATE TABLE users (
            user_id INT NOT NULL AUTO_INCREMENT,
            user_class VARCHAR(100),
            PRIMARY KEY (user_id)
        );
        INSERT INTO users VALUES (1, '1'), (2, '2'), (3, '1,2');

        CREATE TABLE classes (
            class_id INT NOT NULL AUTO_INCREMENT,
            class_name VARCHAR(100),
            PRIMARY KEY (class_id)
        );
        INSERT INTO classes VALUES (1, 'Class 1'), (2, 'Class 2');

    And this is the query I am trying to use, but it only returns the first matching user class and not a concatenated list as hoped:

        SELECT user_id,
               GROUP_CONCAT(DISTINCT class_name SEPARATOR ",") AS class_name
        FROM users, classes
        WHERE user_class IN (class_id)
        GROUP BY user_id;

    Actual output:

        +---------+------------+
        | user_id | class_name |
        +---------+------------+
        |       1 | Class 1    |
        |       2 | Class 2    |
        |       3 | Class 1    |
        +---------+------------+

    Wanted output:

        +---------+------------------+
        | user_id | class_name       |
        +---------+------------------+
        |       1 | Class 1          |
        |       2 | Class 2          |
        |       3 | Class 1, Class 2 |
        +---------+------------------+

    Thanks in advance
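
    The join only ever sees the first id because user_class IN (class_id) compares the whole string '1,2' against a single column. MySQL's FIND_IN_SET() matches an id inside a comma-separated string, so a query along the following lines should produce the wanted output. It is shown wrapped in a small Python/mysql-connector script purely for illustration (connection details are placeholders); the relevant part is the SQL string.

        # Illustration only: join a comma-separated user_class column via FIND_IN_SET.
        import mysql.connector

        conn = mysql.connector.connect(user="app", password="secret", database="test")
        cur = conn.cursor()

        cur.execute("""
            SELECT u.user_id,
                   GROUP_CONCAT(c.class_name ORDER BY c.class_id SEPARATOR ', ') AS class_name
            FROM users u
            JOIN classes c ON FIND_IN_SET(c.class_id, u.user_class)
            GROUP BY u.user_id
        """)
        for user_id, class_name in cur:
            print(user_id, class_name)

        cur.close()
        conn.close()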

    Read the article

  • IS NULL doesn't work as expected in SQL Server 2000 with no Service Pack on it

    - by user306825
    The following batch executed on different instances of SQL Server 2000 illustrates the problem.

        select @@version
        create table a (a int)
        create table b (b int)
        insert into a(a) values (1)
        insert into a(a) values (2)
        insert into a(a) values (3)
        insert into b(b) values (1)
        insert into b(b) values (2)
        select * from a
            left outer join (select 1 as test, b from b) as j on j.b = a.a
            where j.test IS NULL
        drop table a
        drop table b

    Output 1:

        Microsoft SQL Server 2000 - 8.00.194 (Intel X86)
        Aug 6 2000 00:57:48
        Copyright (c) 1988-2000 Microsoft Corporation
        Developer Edition on Windows NT 6.1 (Build 7600: )

        (1 row(s) affected)
        (1 row(s) affected)
        (1 row(s) affected)
        (1 row(s) affected)
        (1 row(s) affected)
        (1 row(s) affected)

        a           test        b
        ----------- ----------- -----------

        (0 row(s) affected)

    Output 2:

        Microsoft SQL Server 2000 - 8.00.2039 (Intel X86)
        May 3 2005 23:18:38
        Copyright (c) 1988-2003 Microsoft Corporation
        Developer Edition on Windows NT 5.2 (Build 3790: Service Pack 2)

        (1 row(s) affected)
        (1 row(s) affected)
        (1 row(s) affected)
        (1 row(s) affected)
        (1 row(s) affected)
        (1 row(s) affected)

        a           test        b
        ----------- ----------- -----------
        3           NULL        NULL

        (1 row(s) affected)

    If someone encounters the same problem - make sure you have the SP installed!

    Read the article

  • memory problem with metapost

    - by yCalleecharan
    Hi, I'm using gnuplot to plot a graph to the mp format and then I'm converting it to eps via the command:

        mpost --sprologues=3 -soutputtemplate=\"%j-%c.eps\" myfigu.mp

    But I don't get the eps output; instead I get this message:

        This is MetaPost, version 1.208 (kpathsea version 3.5.7dev) (mem=mpost 2009.12.12)  6 MAY 2010 23:16
        **myfigu.mp (./myfigu.mp
        ! MetaPost capacity exceeded, sorry [main memory size=3000000].
        _wc-withpen .currentpen.withcolor.currentcolor
        gpdraw-...ptpath[(EXPR0)]_sms((EXPR1),(EXPR2))_wc
        .else:_ac.contour.ptpath[(...
        l.48052 gpdraw(0,517.1a,166.4b)
        ;
        If you really absolutely need more capacity,
        you can ask a wizard to enlarge me.

    How do I tweak MetaPost to get more memory? The file from which I'm plotting has two columns of 189,200 values each. These values are of type long double (output from a C program). The text file containing these two columns of values is about 6 MB. Thanks a lot...

    Read the article

  • jquery calendarpicker callback pass querystring

    - by user577318
    Trying to use this CalendarPicker; source and docs are here: http://bugsvoice.com/applications/bugsVoice/site/test/calendarPickerDemo.jsp

    I need to be able to pass the selected date as a query string variable "searchdate" and reload the page, and also, on page reload, set the calendarPicker's current date from that query string date. This is what I have so far:

        jQuery(document).ready(function() {
            var calendarPicker = jQuery("#calendarpicker").calendarPicker({
                monthNames: ["Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"],
                dayNames: ["Sun", "Mon", "Tue", "Wed", "Thu", "Fri", "Sat"],
                years: 0,
                months: 6,
                days: 5,
                showDayArrows: true,
                callback: function(cal) {
                    // Simple output to test calendar date change
                    jQuery("#output").html("Selected date: " + cal.currentDate.getFullYear() + "-" + cal.currentDate.getMonth() + "-" + cal.currentDate.getDate());

                    // Not working well since it also includes arrows from the datepicker as selectors
                    jQuery(".calDay").children().click(function() {
                        window.location.href = "mysite.com?searchdate=" + cal.currentDate.getFullYear() + "-" + cal.currentDate.getMonth() + "-" + cal.currentDate.getDate();
                    });
                }
            });
        });

    Any help greatly appreciated. Can this be done with AJAX? I am attempting to update a table of events by datepicker.

    Read the article

  • Function that copies into byte vector reverses values

    - by xeross
    Hey, I've written a function to copy any variable type into a byte vector; however, whenever I insert something it gets inserted in reverse. Here's the code:

        template <class Type>
        void Packet::copyToByte(Type input, vector<uint8_t> &output)
        {
            copy((uint8_t*) &input, ((uint8_t*) &input) + sizeof(Type), back_inserter(output));
        }

    Now whenever I add, for example, a uint16_t with the value 0x2f1f, it gets inserted as 1f 2f instead of the expected 2f 1f. What am I doing wrong here? Regards, Xeross
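
    This is almost certainly not a bug in the copy itself: on a little-endian CPU (x86) the low byte of 0x2f1f is stored first in memory, so a byte-wise copy of the object representation naturally yields 1f 2f. A quick way to see the same effect, sketched in Python with the struct module:

        # Little-endian vs big-endian view of the same 16-bit value 0x2f1f.
        import struct

        value = 0x2F1F
        little = struct.pack("<H", value)   # in-memory order on x86
        big = struct.pack(">H", value)      # "human reading" / network order

        print(" ".join(f"{b:02x}" for b in little))   # 1f 2f
        print(" ".join(f"{b:02x}" for b in big))      # 2f 1f

    If the big-endian (network) order is wanted in the vector, the bytes have to be swapped explicitly before or after copying.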

    Read the article

  • WLI domain with 3 servers - issues on JPD process startup

    - by XpiritO
    Hi there. I'm currently working on a clustered WLI environment which comprehends 3 servers: 1 admin server ("AdminServer") and 2 managed servers ("mn1" and "mn2") grouped as a cluster, as follows: Architecture diagram: http://img72.imageshack.us/img72/4112/clusterdiagram.jpg I've developed a JPD process to execute some scheduled tasks, invoked using a Message Broker. I've deployed this project into a single-server WLI domain (with AdminServer only) and it works as expected: the JPD process is invoked (I've configured a Timer Event Generator instance to start it up). Message broker: http://img532.imageshack.us/img532/1443/wlimessagebroker.jpg Timer event generator: http://img408.imageshack.us/img408/7358/wlitimereventgenerator.jpg In order to achieve fail-over and load-balancing capabilities, I'm currently trying to deploy this JPD process into this clustered WLI environment. Although, I'm having some issues with this, as I cannot get it to work properly, even if it still works. Here is a screenshot of the "WLI Process Instance Monitor" (with AdminServer and mn1 instances up and running): http://img710.imageshack.us/img710/8477/wliprocessinstancemonit.jpg According to this screen the process seems to be running, as it shows in this instance monitor screen. However, I don't see any output coming out neither at AdminServer console or mn1 console. In single-server domain it was visible output from JPD process "timeout" callback method, wich implementation is shown below: @com.bea.wli.control.broker.MessageBroker.StaticSubscription(xquery = "", filterValueMatch = "", channelName = "/SamplePrefix/Samples/SampleStringChannel", messageBody = "{x0}") public void subscription(java.lang.String x0) { String toReturn=""; try { Context myCtx = new InitialContext(); MBeanHome mbeanHome = (MBeanHome)myCtx.lookup("weblogic.management.home.localhome"); toReturn=mbeanHome.getMBeanServer().getServerName(); System.out.println("**** executed at **** " + System.currentTimeMillis() + " by: " + toReturn); } catch (Exception e) { System.out.println("Exception!"); e.printStackTrace(); } } (...) @org.apache.beehive.controls.api.events.EventHandler(field = "myT", eventSet = com.bea.control.WliTimerControl.Callback.class, eventName = "onTimeout") public void myT_onTimeout(long time, java.io.Serializable data) { // #START: CODE GENERATED - PROTECTED SECTION - you can safely add code above this comment in this method. #// // input transform System.out.println("**** published at **** " + System.currentTimeMillis()); publishControl.publish("aaaa"); // parameter assignment // #END : CODE GENERATED - PROTECTED SECTION - you can safely add code below this comment in this method. #// } and here is the output visible at "AdminServer" console in single-server domain testing: **** published at **** 1273238090713 **** executed at **** 1273238132123 by: AdminServer **** published at **** 1273238152462 **** executed at **** 1273238152562 by: AdminServer (...) What may be wrong with my clustered configuration? Am I missing something to accomplish clustered deployment? Thanks in advance for your help.

    Read the article

  • gethostbyname in C

    - by Matic
    I don't know how to write applications in C, but I need a tiny program that does:

        lh = gethostbyname("localhost");
        output = lh->h_name;

    and then prints the output variable. The above code is used in the PHP MongoDB database driver to get the hostname of the computer (the hostname is part of the input used to generate a unique ID). I'm skeptical that this will return the hostname, so I'd like some proof. Any code examples would be most helpful. Happy day, Matic
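
    The snippet does become a complete C program once the netdb.h/stdio.h plumbing and a printf are added, but note that for "localhost" the h_name field is typically the canonical name returned by the resolver (often just "localhost"), not the machine's own hostname. As quick, hedged evidence, here is the same lookup done from Python; the results depend on the local resolver configuration (/etc/hosts etc.).

        # Same lookup as gethostbyname("localhost") -> h_name, done from Python.
        import socket

        canonical, aliases, addresses = socket.gethostbyname_ex("localhost")
        print("h_name for 'localhost':", canonical)     # usually just "localhost"
        print("actual machine hostname:", socket.gethostname())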

    Read the article

  • How to access private static target field in aspect in AspectJ?

    - by LihO
    I have a simple class Main with a private static int x, and an aspect that should output the old value of x before it is reassigned:

        public class Main {
            private static int x;

            public static void main(String[] args) {
                foo(7);
            }

            public static void foo(int y) {
                x = y;
            }
        }

    and MonitorX.aj:

        public aspect MonitorX {
            before() : set(static int Main.x) {
                System.out.println(Main.x);
            }
        }

    which doesn't work, since I can't access the private x using Main.x. I've also tried:

        before(int t) : set(static int Main.x) && target(t) {
            System.out.println(t);
        }

    which doesn't work either (nothing is output; if I try to output a string, it seems that the aspect isn't invoked at all). However, printing out the new value that is being assigned works:

        before(int newVal) : set(static int Main.x) && args(newVal) {
            System.out.println(newVal);
        }

    What am I missing?

    Read the article

  • Parsing XML data with Namespaces in PHP

    - by osbmedia
    I'm trying to work with this XML feed that uses namespaces, and I'm not able to get past the colon in the tags. Here's what the XML feed looks like:

        <r25:events pubdate="2010-05-19T13:58:08-04:00">
          <r25:event xl:href="event.xml?event_id=328" id="BRJDMzI4" crc="00000022" status="est">
            <r25:event_id>328</r25:event_id>
            <r25:event_name>Testing 09/2005-08/2006</r25:event_name>
            <r25:alien_uid/>
            <r25:event_priority>0</r25:event_priority>
            <r25:event_type_id xl:href="evtype.xml?type_id=105">105</r25:event_type_id>
            <r25:event_type_name>CABINET</r25:event_type_name>
            <r25:node_type>C</r25:node_type>
            <r25:node_type_name>cabinet</r25:node_type_name>
            <r25:state>1</r25:state>
            <r25:state_name>Tentative</r25:state_name>
            <r25:event_locator>2005-AAAAMQ</r25:event_locator>
            <r25:event_title/>
            <r25:favorite>F</r25:favorite>
            <r25:organization_id/>
            <r25:organization_name/>
            <r25:parent_id/>
            <r25:cabinet_id xl:href="event.xml?event_id=328">328</r25:cabinet_id>
            <r25:cabinet_name>cabinet 09/2005-08/2006</r25:cabinet_name>
            <r25:start_date>2005-09-01</r25:start_date>
            <r25:end_date>2006-08-31</r25:end_date>
            <r25:registration_url/>
            <r25:last_mod_dt>2008-02-27T14:22:43-05:00</r25:last_mod_dt>
            <r25:last_mod_user>abc00296004</r25:last_mod_user>
          </r25:event>
        </r25:events>

    And here is what I'm using for code - I'm trying to throw these into a bunch of arrays where I can format the output however I want:

        <?php
        $ch = curl_init();
        curl_setopt($ch, CURLOPT_URL, "http://somedomain.com/blah.xml");
        curl_setopt($ch, CURLOPT_HTTPHEADER, Array("Content-Type: text/xml"));
        curl_setopt($ch, CURLOPT_USERPWD, "username:password");
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        $output = curl_exec($ch);
        curl_close($ch);

        $xml = new SimpleXmlElement($output);
        foreach ($xml->events->event as $entry) {
            $dc = $entry->children('http://www.collegenet.com/r25');
            echo $entry->event_name . "<br />";
            echo $entry->event_id . "<br /><br />";
        }
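
    The loop over $xml->events->event finds nothing because every element lives in the r25 namespace, so in SimpleXML the children have to be fetched with children('http://www.collegenet.com/r25') starting from the root (or via registerXPathNamespace). To illustrate the namespace-aware traversal, here is a sketch in Python with the standard-library ElementTree; the namespace URI is the one that appears in the PHP snippet above and is assumed to be correct, and the file name is a placeholder.

        # Namespace-aware parsing of the r25 feed, sketched with ElementTree.
        # Assumes the real feed declares xmlns:r25 on the root element.
        import xml.etree.ElementTree as ET

        NS = {"r25": "http://www.collegenet.com/r25"}

        with open("blah.xml", "rb") as fh:               # placeholder file name
            root = ET.parse(fh).getroot()                # root is <r25:events>

        for event in root.findall("r25:event", NS):
            name = event.findtext("r25:event_name", default="", namespaces=NS)
            event_id = event.findtext("r25:event_id", default="", namespaces=NS)
            print(event_id, name)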

    Read the article

  • e2fsck extremely slow, although enough memory exists

    - by kaefert
    I've got this external USB-Disk: kaefert@blechmobil:~$ lsusb -s 2:3 Bus 002 Device 003: ID 0bc2:3320 Seagate RSS LLC As can be seen in this dmesg output, there is some problem that prevents that disk from beeing mounted: kaefert@blechmobil:~$ dmesg ... [ 113.084079] usb 2-1: new high-speed USB device number 3 using ehci_hcd [ 113.217783] usb 2-1: New USB device found, idVendor=0bc2, idProduct=3320 [ 113.217787] usb 2-1: New USB device strings: Mfr=2, Product=3, SerialNumber=1 [ 113.217790] usb 2-1: Product: Expansion Desk [ 113.217792] usb 2-1: Manufacturer: Seagate [ 113.217794] usb 2-1: SerialNumber: NA4J4N6K [ 113.435404] usbcore: registered new interface driver uas [ 113.455315] Initializing USB Mass Storage driver... [ 113.468051] scsi5 : usb-storage 2-1:1.0 [ 113.468180] usbcore: registered new interface driver usb-storage [ 113.468182] USB Mass Storage support registered. [ 114.473105] scsi 5:0:0:0: Direct-Access Seagate Expansion Desk 070B PQ: 0 ANSI: 6 [ 114.474342] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB) [ 114.475089] sd 5:0:0:0: [sdb] Write Protect is off [ 114.475092] sd 5:0:0:0: [sdb] Mode Sense: 43 00 00 00 [ 114.475959] sd 5:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA [ 114.477093] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB) [ 114.501649] sdb: sdb1 [ 114.502717] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB) [ 114.504354] sd 5:0:0:0: [sdb] Attached SCSI disk [ 116.804408] EXT4-fs (sdb1): ext4_check_descriptors: Checksum for group 3976 failed (47397!=61519) [ 116.804413] EXT4-fs (sdb1): group descriptors corrupted! ... So I went and fired up my favorite partition manager - gparted, and told it to verify and repair the partition sdb1. 
This made gparted call e2fsck (version 1.42.4 (12-Jun-2012)) e2fsck -f -y -v /dev/sdb1 Although gparted called e2fsck with the "-v" option, sadly it doesn't show me the output of my e2fsck process (bugreport https://bugzilla.gnome.org/show_bug.cgi?id=467925 ) I started this whole thing on Sunday (2012-11-04_2200) evening, so about 48 hours ago, this is what htop says about it now (2012-11-06-1900): PID USER PRI NI VIRT RES SHR S CPU% MEM% TIME+ Command 3704 root 39 19 1560M 1166M 768 R 98.0 19.5 42h56:43 e2fsck -f -y -v /dev/sdb1 Now I found a few posts on the internet that discuss e2fsck running slow, for example: http://gparted-forum.surf4.info/viewtopic.php?id=13613 where they write that its a good idea to see if the disk is just that slow because maybe its damaged, and I think these outputs tell me that this is not the case in my case: kaefert@blechmobil:~$ sudo hdparm -tT /dev/sdb /dev/sdb: Timing cached reads: 3562 MB in 2.00 seconds = 1783.29 MB/sec Timing buffered disk reads: 82 MB in 3.01 seconds = 27.26 MB/sec kaefert@blechmobil:~$ sudo hdparm /dev/sdb /dev/sdb: multcount = 0 (off) readonly = 0 (off) readahead = 256 (on) geometry = 364801/255/63, sectors = 5860533160, start = 0 However, although I can read quickly from that disk, this disk speed doesn't seem to be used by e2fsck, considering tools like gkrellm or iotop or this: kaefert@blechmobil:~$ iostat -x Linux 3.2.0-2-amd64 (blechmobil) 2012-11-06 _x86_64_ (2 CPU) avg-cpu: %user %nice %system %iowait %steal %idle 14,24 47,81 14,63 0,95 0,00 22,37 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util sda 0,59 8,29 2,42 5,14 43,17 160,17 53,75 0,30 39,80 8,72 54,42 3,95 2,99 sdb 137,54 5,48 9,23 0,20 587,07 22,73 129,35 0,07 7,70 7,51 16,18 2,17 2,04 Now I researched a little bit on how to find out what e2fsck is doing with all that processor time, and I found the tool strace, which gives me this: kaefert@blechmobil:~$ sudo strace -p3704 lseek(4, 41026998272, SEEK_SET) = 41026998272 write(4, "\212\354K[_\361\3nl\212\245\352\255jR\303\354\312Yv\334p\253r\217\265\3567\325\257\3766"..., 4096) = 4096 lseek(4, 48404766720, SEEK_SET) = 48404766720 read(4, "\7t\260\366\346\337\304\210\33\267j\35\377'\31f\372\252\ffU\317.y\211\360\36\240c\30`\34"..., 4096) = 4096 lseek(4, 41027002368, SEEK_SET) = 41027002368 write(4, "\232]7Ws\321\352\t\1@[+5\263\334\276{\343zZx\352\21\316`1\271[\202\350R`"..., 4096) = 4096 lseek(4, 48404770816, SEEK_SET) = 48404770816 read(4, "\17\362r\230\327\25\346//\210H\v\311\3237\323K\304\306\361a\223\311\324\272?\213\tq \370\24"..., 4096) = 4096 lseek(4, 41027006464, SEEK_SET) = 41027006464 write(4, "\367yy>x\216?=\324Z\305\351\376&\25\244\210\271\22\306}\276\237\370(\214\205G\262\360\257#"..., 4096) = 4096 lseek(4, 48404774912, SEEK_SET) = 48404774912 read(4, "\365\25\0\21|T\0\21}3t_\272\373\222k\r\177\303\1\201\261\221$\261B\232\3142\21U\316"..., 4096) = 4096 ^CProcess 3704 detached around 16 of these lines every second, so 4 read and 4 write operations every second, which I don't consider to be a lot.. And finally, my question: Will this process ever finish? If those numbers from fseek (48404774912) represent bytes, that would be something like 45 gigabytes, with this beeing a 3 terrabyte disk, which would give me 134 days to go, if the speed stays constant, and e2fsck scans the disk like this completly and only once. Do you have some advice for me? 
I have most of the data on that disk elsewhere, but I've put a lot of hours into sorting and merging it to this disk, so I would prefer to getting this disk up and running again, without formatting it anew. I don't think that the hardware is damaged since the disk is only a few months and since I can't see any I/O errors in the dmesg output. UPDATE: I just looked at the strace output again (2012-11-06_2300), now it looks like this: lseek(4, 1419860611072, SEEK_SET) = 1419860611072 read(4, "3#\f\2447\335\0\22A\355\374\276j\204'\207|\217V|\23\245[\7VP\251\242\276\207\317:"..., 4096) = 4096 lseek(4, 43018145792, SEEK_SET) = 43018145792 write(4, "]\206\231\342Y\204-2I\362\242\344\6R\205\361\324\177\265\317C\334V\324\260\334\275t=\10F."..., 4096) = 4096 lseek(4, 1419860615168, SEEK_SET) = 1419860615168 read(4, "\262\305\314Y\367\37x\326\245\226\226\320N\333$s\34\204\311\222\7\315\236\336\300TK\337\264\236\211n"..., 4096) = 4096 lseek(4, 43018149888, SEEK_SET) = 43018149888 write(4, "\271\224m\311\224\25!I\376\16;\377\0\223H\25Yd\201Y\342\r\203\271\24eG<\202{\373V"..., 4096) = 4096 lseek(4, 1419860619264, SEEK_SET) = 1419860619264 read(4, ";d\360\177\n\346\253\210\222|\250\352T\335M\33\260\320\261\7g\222P\344H?t\240\20\2548\310"..., 4096) = 4096 lseek(4, 43018153984, SEEK_SET) = 43018153984 write(4, "\360\252j\317\310\251G\227\335{\214`\341\267\31Y\202\360\v\374\307oq\3063\217Z\223\313\36D\211"..., 4096) = 4096 So the numbers in the lseek lines before the reads, like 1419860619264 are already a lot bigger, standing for 1.29 terabytes if those numbers are bytes, so it doesn't seem to be a linear progress on a big scale, maybe there are only some areas that need work, that have big gaps in between them. UPDATE2: Okey, big disappointment, the numbers are back to very small again (2012-11-07_0720) lseek(4, 52174548992, SEEK_SET) = 52174548992 read(4, "\374\312\22\\\325\215\213\23\0357U\222\246\370v^f(\312|f\212\362\343\375\373\342\4\204mU6"..., 4096) = 4096 lseek(4, 46603526144, SEEK_SET) = 46603526144 write(4, "\370\261\223\227\23?\4\4\217\264\320_Am\246CQ\313^\203U\253\274\204\277\2564n\227\177\267\343"..., 4096) = 4096 so either e2fsck goes over the data multiple times, or it just hops back and forth multiple times. Or my assumption that those numbers are bytes is wrong. UPDATE3: Since it's mentioned here http://forums.fedoraforum.org/showthread.php?t=282125&page=2 that you can testisk while e2fsck is running, i tried that, though not with a lot of success. When asking testdisk to display the data of my partition, this is what I get: TestDisk 6.13, Data Recovery Utility, November 2011 Christophe GRENIER <[email protected]> http://www.cgsecurity.org 1 P Linux 0 4 5 45600 40 8 732566272 Can't open filesystem. Filesystem seems damaged. And this is what strace currently gives me (2012-11-07_1030) lseek(4, 212460343296, SEEK_SET) = 212460343296 read(4, "\315Mb\265v\377Gn \24\f\205EHh\2349~\330\273\203\3375\206\10\r3=W\210\372\352"..., 4096) = 4096 lseek(4, 47347830784, SEEK_SET) = 47347830784 write(4, "]\204\223\300I\357\4\26\33+\243\312G\230\250\371*m2U\t_\215\265J \252\342Pm\360D"..., 4096) = 4096 (times are in CET)

    Read the article

  • Problem with running the Nutch command from PHP exec()

    - by Annibigi
    My Nutch directory lies in /home/myserv/nutch/nutch-1.0/ and my PHP application is in the directory /home/myserv/www/. There's a PHP file in my /home/myserv/www/ directory that runs an exec command to run a Nutch command. The PHP code is like:

        $output = exec("bin/nutch all");

    When I run the command from the command line, I need to be in the "/home/myserv/nutch/nutch-1.0/" directory. When I try to run it through the PHP exec(), I just can't seem to make it execute. I have tried giving the full path like below, but nothing works :(

        $output = exec("/home/myserv/nutch/nutch-1.0/bin/nutch all");

    Desperately looking for help.
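
    Since the command only works by hand from /home/myserv/nutch/nutch-1.0/, the exec() call most likely needs that same working directory (and environment, e.g. JAVA_HOME); in PHP that would mean something like chdir('/home/myserv/nutch/nutch-1.0') before the call, and appending 2>&1 so the error output becomes visible. The same pattern is sketched below with Python's subprocess, purely as an illustration of running bin/nutch with an explicit working directory and captured stderr.

        # Sketch: run the command from the question with an explicit cwd and
        # capture stderr, so a failure explains itself instead of going silent.
        import subprocess

        result = subprocess.run(
            ["bin/nutch", "all"],                     # command from the question
            cwd="/home/myserv/nutch/nutch-1.0",       # directory that works by hand
            capture_output=True,
            text=True,
        )
        print(result.returncode)
        print(result.stdout)
        print(result.stderr)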

    Read the article

  • Is it possible to nest folders in the Flash Builder bin-debug

    - by ThunderChunky_SF
    I'm trying to set up my bin-debug folder so that the structure looks like this:

        bin-debug
            assets
                img
                swf
                    main.swf
            css
                style.css
            js
                swfobject.js
            index.html

    I've tried setting the project's output folder to bin-debug/assets/swf, which does get my main.swf where I want it, but then my other source folders get dumped into that swf folder as well. What I would really like is to tell Flash Builder to put my swf into a nested folder and to be able to specify where my build folders' output goes as well. Is this at all possible without resorting to ANT scripts?

    Read the article

  • Return database messages on successful SQL execution when using ADO

    - by peacedog
    I'm working on a legacy VB6 app here at work, and it has been a long time since I've looked at VB6 or ADO. One thing the app does is execute SQL tasks and then output the success/failure into an XML file. If there is an error, it inserts the error text into the task node. What I have been asked to do is try to do the same with the other mundane messages that result from successfully executed tasks, like "(323 row(s) affected)". There is no command object being used; it's just an ADODB.Connection object. Here is the gist of the code:

        Dim sqlStatement As String
        sqlStatement = "..." ' sql for task

        Dim sqlConn As ADODB.Connection
        Set sqlConn = ... ' connection magic

        sqlConn.Execute sqlStatement, , adExecuteNoRecords

    What is the best way for me to capture the non-error messages so I can output them? Or is it even possible?

    Read the article

  • .NET 2d library for circuit diagrams

    - by Dinah
    I want to draw and manipulate logic flow (as opposed to analog) circuit diagrams, and I'm trying not to reinvent the wheel. Problems like positioning, line drawing, line crossing and connecting, path finding, rubber-band lines, and drag & drop are also identical in flow charts, UML diagrams, or class diagrams, so I started by viewing related topics via Google and Stack Overflow. However, the following requirements seemed to keep me from finding anything that quite fit:

        - assuming a chip has an image with input and output stubs, the connecting lines would connect to the chip only at the exact spots of the I/O stubs
        - the color of a connecting line can be set as per a chip's output

    Is there an existing .NET 2D graphics library that would be suitable for drawing circuit diagrams?

    Read the article

  • the problem about different treatment to __VA_ARGS__ when using VS 2008 and GCC

    - by liuliu
    I am trying to identify a problem caused by an unusual usage of variadic macros. Here is the hypothetical macro:

        #define va(c, d, ...) c(d, __VA_ARGS__)
        #define var(a, b, ...) va(__VA_ARGS__, a, b)

        var(2, 3, printf, "%d %d %d\n", 1);

    For gcc, the preprocessor will output

        printf("%d %d %d\n", 1, 2, 3)

    but for VS 2008, the output is

        printf, "%d %d %d\n", 1(2, 3);

    I suspect the difference is caused by the different treatment of __VA_ARGS__: gcc will first expand the expression to va(printf, "%d %d %d\n", 1, 2, 3) and treat 1, 2, 3 as the __VA_ARGS__ for macro va, but VS 2008 will first treat b as the __VA_ARGS__ for macro va and then do the expansion. Which one is the correct interpretation of C99 variadic macros, or does my usage fall into undefined behavior?

    Read the article
