Search Results

Search found 29837 results on 1194 pages for 'number to word'.


  • Multiple IntenseDebate Comment Counts

    - by Aristotle
    I just set up IntenseDebate on my blog this evening and am, for the most part, pleased with it. One thing I did see is that they offered me a small snippet to show the current number of comments:

        <script>
        var idcomments_acct = 'abcdefgef12345678mykey8675309acdc';
        var idcomments_post_id;
        var idcomments_post_url;
        </script>
        <script type="text/javascript" src="http://www.intensedebate.com/js/genericLinkWrapperV2.js"></script>

    This is nice, but what I would like to do is have something similar on my archives page, where many posts are listed - not just one. Presently the page looks like this:

        Some Post Title
        Author Name
        Short abstract from this post...

        Some Post Title
        Author Name
        Short abstract from this post...

    I would like it to look like this:

        Some Post Title
        Author Name
        Short abstract from this post...
        7 Comments

        Some Post Title
        Author Name
        Short abstract from this post...
        3 Comments

    But I'm not exactly sure how I can do this with IntenseDebate. Do they offer any sort of method to gather the total number of comments for multiple pages from a single page?


  • de-assign alt + right arrow

    - by jcollum
    I'm trying to map View.NavigateBackward and View.NavigateForward like so:

        View.NavigateBackward = Alt + LeftArrow
        View.NavigateForward = Alt + RightArrow

    Pretty simple to do in Visual Studio with the Keyboard Options dialog. OK, so I've assigned the shortcuts and the NavigateBackward one is working. But NavigateForward, which used to be assigned to Edit.CompleteWord, is staying with its old assignment. I've checked that Edit.CompleteWord is now assigned to 'Ctrl+K, W', but Alt+RightArrow is still behaving as complete word. Is there something special about the arrow keys that prevents me from re-assigning them? I want to do this so the mouse buttons behave the same in VS 2010 as in my web browser. It works fine for the back button, but the forward button won't re-assign properly. Suggestions?


  • How does C#'s DateTime.Now affect query plan caching in SQL Server?

    - by Bill Paetzke
    Given: Let's say we have a stored procedure. It reports data back to a user on a webpage. The user can set a date range. If the user sets today's date as the "end date," which includes today's data, the web app passes DateTime.Now to the SQL proc. Let's say that one user runs a report - 5/1/2010 to now - over and over, several times. On the webpage, the user sees "5/1/2010" to "5/4/2010." But the web app passes DateTime.Now to the SQL proc as the end date. So the end date in the proc will always be different, even though the user is querying a similar date range. Assume the number of records in the table and the number of users are large, so any performance gains matter. Hence the importance of the question.

    Question: Does passing DateTime.Now as a parameter to a proc prevent SQL Server from caching the query plan? If so, is the web app missing out on huge performance gains?

    Possible solution: I thought DateTime.Today.AddDays(1) would be a possible solution. It would allow the user to get the latest data and always pass the same end date to the SQL proc - "5/5/2010" in this case. Please speak to this as well.

    Sample proc and execution (if that helps to understand):

        CREATE PROCEDURE GetFooData
            @StartDate datetime,
            @EndDate datetime
        AS
        SELECT *
        FROM Foo
        WHERE LogDate >= @StartDate
          AND LogDate < @EndDate

    Here's a sample execution using DateTime.Now:

        EXEC GetFooData '2010-05-01', '2010-05-04 15:41:27' -- passed in DateTime.Now

    Here's a sample execution using DateTime.Today.AddDays(1):

        EXEC GetFooData '2010-05-01', '2010-05-05' -- passed in DateTime.Today.AddDays(1)

    The same data is returned for both executions, since the current time is 2010-05-04 15:41:27.
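
    A minimal C# sketch of the caller, not from the original post, assuming ADO.NET (System.Data / System.Data.SqlClient); connectionString and startDate stand in for whatever the app already has. Because the procedure is called with parameters, the cached plan is keyed on the procedure itself rather than on the parameter values, so plan reuse is not lost either way; the midnight-tomorrow value just keeps the end date stable all day:

        // hedged sketch: parameterized proc call; the plan is reused regardless of the value passed
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("GetFooData", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.Add("@StartDate", SqlDbType.DateTime).Value = startDate;
            // midnight tomorrow: the same value all day, and it still includes all of today's rows
            cmd.Parameters.Add("@EndDate", SqlDbType.DateTime).Value = DateTime.Today.AddDays(1);

            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    // consume rows...
                }
            }
        }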


  • Why are my connections not closed even if I explicitly dispose of the DataContext?

    - by Chris Simpson
    I encapsulate my LINQ to SQL calls in a repository class which is instantiated in the constructor of my overloaded controller. The constructor of my repository class creates the data context, so that for the life of the page load only one data context is used. In the destructor of the repository class I explicitly call Dispose on the DataContext, though I do not believe this is necessary.

    Using Performance Monitor, if I watch my User Connections count and repeatedly load a page, the number increases once per page load. Connections do not get closed or reused (for about 20 minutes). I tried putting Pooling=false in my config to see if this had any effect, but it did not. In any case, with pooling I wouldn't expect a new connection for every load; I would expect it to reuse connections. I've tried putting a break point in the destructor to make sure the dispose is being hit, and sure enough it is. So what's happening?

    Some code to illustrate what I said above. The controller:

        public class MyController : Controller
        {
            protected MyRepository rep;

            public MyController()
            {
                rep = new MyRepository();
            }
        }

    The repository:

        public class MyRepository
        {
            protected MyDataContext dc;

            public MyRepository()
            {
                dc = getDC();
            }

            ~MyRepository()
            {
                if (dc != null)
                {
                    //if (dc.Connection.State != System.Data.ConnectionState.Closed)
                    //{
                    //    dc.Connection.Close();
                    //}
                    dc.Dispose();
                }
            }

            // etc
        }

    Note: I add a number of hints and context information to the DC for auditing purposes. This is essentially why I want one connection per page load.
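
    Not from the question itself, but a sketch of the usual alternative for comparison: a finalizer only runs whenever the GC gets around to it, so disposal there is not deterministic. Making the repository IDisposable and disposing it at the end of the request (the class names follow the question; the wiring is illustrative) releases the context as soon as the page is done:

        public class MyRepository : IDisposable
        {
            protected MyDataContext dc;

            public MyRepository()
            {
                dc = getDC();
            }

            public void Dispose()
            {
                // deterministic cleanup: runs when the caller disposes, not when the GC finalizes
                if (dc != null)
                {
                    dc.Dispose();
                    dc = null;
                }
            }
        }

        public class MyController : Controller
        {
            protected MyRepository rep = new MyRepository();

            // ASP.NET MVC controllers are IDisposable themselves; Dispose(bool) is called
            // at the end of the request, so the repository can be chained into it
            protected override void Dispose(bool disposing)
            {
                if (disposing)
                {
                    rep.Dispose();
                }
                base.Dispose(disposing);
            }
        }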


  • start-stop-daemon quoted arguments misinterpreted

    - by Martin Westin
    Hi, I have been trying to make an init script using start-stop-daemon. I am stuck on the arguments to the daemon. I want to keep these in a variable at the top of the script, but I can't get the quoting to filter down correctly. I'll use ls here so we don't have to look at binaries and arguments that most people won't know or care about. The end result I am looking for is for start-stop-daemon to run:

        ls -la "/folder with space/"

    What I have is:

        DAEMON=/usr/bin/ls
        DAEMON_OPTS='-la "/folder with space/"'
        start-stop-daemon --start --make-pidfile --pidfile $PID --exec $DAEMON -- $DAEMON_OPTS

    Double-escaping the options and trying innumerable variations of quoting do not help: by the time they reach the daemon they are always messed up. Enclosing $DAEMON_OPTS in quotes changes things - then they are seen as one single quoted argument. Never the right number of arguments, though :) Echoing the command line (start-stop-daemon ...) prints exactly the right thing to screen, but the daemon (the real one, not ls) complains about the wrong number of arguments. How do I specify a variable so that quotes inside it are carried through to the daemon correctly? Thanks, Martin
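
    One way this is commonly handled, sketched under the assumption that the init script runs under bash (plain /bin/sh has no arrays): keep the options in an array so each element survives word splitting intact, and expand it with "${ARRAY[@]}" at the call site:

        #!/bin/bash
        # sketch: an array keeps "/folder with space/" together as one argument
        DAEMON=/usr/bin/ls
        DAEMON_OPTS=(-la "/folder with space/")
        PID=/var/run/mydaemon.pid    # example path

        start-stop-daemon --start --make-pidfile --pidfile "$PID" \
            --exec "$DAEMON" -- "${DAEMON_OPTS[@]}"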


  • How do I extract Excel data from multiple worksheets and put it into one sheet?

    - by user167210
    In a workbook I have 7 sheets (Totals and then Mon to Sat). I want to extract rows which have the word "CHEQ" in one of their cells (this is a dropdown list with two options, CHEQ/PAID) from all sheets. On my front sheet I used this formula:

        =IF(ROWS(A$13:A13)>$C$10,"",INDEX(Monday!A$3:A$62,SMALL(IF(Monday[Paid]=$A$10,ROW(Monday[Paid])-ROW(Monday!$I$3)+1),ROWS(A$13:A13))))

    This formula works fine for one worksheet (e.g. Monday), but is it possible to show the extracted rows from all 6 sheets on the front page? I only have Excel, NOT Access. These are the 12 headers on row A12:

        Col Name Cod House Car Date Discount 2nd Paid Extra Letter Posted

    The exported data appears like this (this is just an example):

        Col Name Cod House Car Date Discount 2nd Paid Extra Letter Posted
        12 Robbs 1244 Ren 11/10 10% 5 CHEQ 0 0 No
        15 Jones 7784 Ren 12/10 15% 1 CHEQ 0 0 No
        18 Doese 1184 Ren 12/11 12% 1 CHEQ 0 0 No

    Any ideas on what to do to this formula? I am using Excel 2010.


  • setBit java method using bit shifting and hexadecimal code - question

    - by somewhat_confused
    I am having trouble understanding what is happening in the two lines with the 0xFF7F and the one below it. There is a link here that explains it to some degree: http://www.herongyang.com/java/Bit-String-Set-Bit-to-Byte-Array.html I don't know if ((0xFF7F>>posBit) & oldByte) & 0x00FF is supposed to be three values 'AND'ed together, or how this is supposed to be read. If anyone can clarify what is happening here a little better, I would greatly appreciate it.

        private static void setBit(byte[] data, final int pos, final int val) {
            int posByte = pos/8;
            int posBit = pos%8;
            byte oldByte = data[posByte];
            oldByte = (byte) (((0xFF7F>>posBit) & oldByte) & 0x00FF);
            byte newByte = (byte) ((val<<(8-(posBit+1))) | oldByte);
            data[posByte] = newByte;
        }

    This method is called from a selectBits method as setBit(out, i, val), where:

        out - byte[] out = new byte[numOfBytes]; (numOfBytes can be 7 in this situation)
        i   - the number [57], the original number from the PC1 int array holding the 56 integers
        val - the bit taken from the byte array by the getBit() method
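
    As an illustration (my own sketch, not code from the post), the masking step may be easier to see printed out: 0xFF7F is 1111 1111 0111 1111 in binary, so (0xFF7F >> posBit) slides the single 0 to the right, the & with oldByte clears exactly that bit, and the final & 0x00FF just keeps the low byte:

        public class MaskDemo {
            public static void main(String[] args) {
                int oldByte = 0xFF; // example byte with all bits set
                for (int posBit = 0; posBit < 8; posBit++) {
                    int mask = (0xFF7F >> posBit) & 0x00FF;  // a 0 at position posBit, counting from the MSB
                    int cleared = oldByte & mask;            // the target bit is cleared, the rest are untouched
                    // note: toBinaryString drops leading zeros
                    System.out.printf("posBit=%d mask=%8s cleared=%8s%n",
                            posBit,
                            Integer.toBinaryString(mask),
                            Integer.toBinaryString(cleared));
                }
            }
        }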


  • Can't see *all* databases in a remote SQL Server instance

    - by George
    Yesterday I posted a related question on Stack Overflow. That problem involved not being able to see a SQL Server 2008 instance on another PC. I am not sure why adding the port number enabled me to see a SQL Server that I could not otherwise see, since the port number that I specified was, after all, the default port.

    Now I notice that I have another problem. While I can connect to the remote SQL 2008 Server instance, I cannot see all the databases in the instance. I am trying to connect to the 2008 instance from another PC using SQL Server 2008 Management Studio. I am connecting from a Windows 7 Ultimate PC to a Windows XP Pro PC.

    I suspect that my problem has something to do with not all databases in the remote instance having the same version. For example, I "upgraded" a SQL 2005 database to 2008 by doing a backup from 2005 and importing it into 2008. When I realized that this was not one of the databases that I could see from my other PC, I noticed that the compatibility level of the imported database was still 2005, so I changed it to 2008. Still I could not see the database.

    I am sure that this is relevant: I just noticed that on my remote server, the instance node, named "sql2008", says "version 10" when I am on the remote server, but when I connect to the sql2008 remote instance from my local PC, the connection is shown locally as being a "SQL Server version 8.0" instance. I suspect that locally, I am only being shown databases that are somehow in the remote 2008 instance but have not been upgraded. I guess I don't know what constitutes an upgraded database, and I don't know how to connect so that I can see all the databases, even if this requires multiple connections from the source PC.


  • Serious voltage and temperature problems

    - by James Willson
    My computer has been acting up when I play games, so I wanted to look into why.

    Issue 1: GPU temperature. According to Afterburner and SpeedFan, my 8800GTX idles at 90 degrees and then, when playing games, shoots up to over 110C, which is when my graphics basically start to give rendering issues.

    Issue 2: CPU temperature. SpeedFan is saying my CPU temp is 83C idle, but when I look at Core Temp it says core0 is at 35C and core1 is at 33C.

    Issue 3: Voltages. This is what SpeedFan is saying for my voltages:

        Vcore1: 1.01V
        Vcore2: 1.90V
        +3.3V: 3.31V
        +5V: 4.95V
        +12V: 0.51V
        -12V: -16.80V
        -5V: -8.43V
        +5V: 5.13V
        Vbat: 3.25V
        Vcore: 3.00V
        +3.3V: 3.20V

    These voltages, for lack of a better word, look f*cked. With all this happening, the computer runs OK under normal use. Is the software giving out incorrect readouts, or should I instead immediately move the computer into another room before it explodes? P.S. I would like to add that this is a stock system: EVGA 8800GTX, E6850 CPU, 800W PSU.


  • Calculate the retrieved rows in database Visual C#

    - by Tanya Lertwichaiworawit
    I am new to Visual C# and would like to know how to do calculations on data retrieved from a database. Using the above GUI, when "Calculate" is clicked, the program will display the number of students in textBox1 and the average GPA of all students in textBox2. My database table is "Students". I was able to display the number of students, but I'm still confused about how to calculate the average GPA. Here's my code:

        private void button1_Click(object sender, EventArgs e)
        {
            string connection = @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\Database1.accdb";
            OleDbConnection connect = new OleDbConnection(connection);
            string sql = "SELECT * FROM Students";
            connect.Open();
            OleDbCommand command = new OleDbCommand(sql, connect);
            DataSet data = new DataSet();
            OleDbDataAdapter adapter = new OleDbDataAdapter(command);
            adapter.Fill(data, "Students");
            textBox1.Text = data.Tables["Students"].Rows.Count.ToString();
            double gpa;
            for (int i = 0; i < data.Tables["Students"].Rows.Count; i++)
            {
                gpa = Convert.ToDouble(data.Tables["Students"].Rows[i][2]);
            }
            connect.Close();
        }
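
    A sketch of one way to finish this (my illustration, assuming as in the question that GPA is the third column, index 2): keep a running sum inside the existing loop and divide by the row count afterwards.

        // hedged sketch: sum the GPA column, then divide by the number of rows
        double total = 0;
        int count = data.Tables["Students"].Rows.Count;

        for (int i = 0; i < count; i++)
        {
            total += Convert.ToDouble(data.Tables["Students"].Rows[i][2]);
        }

        double averageGpa = count > 0 ? total / count : 0;
        textBox2.Text = averageGpa.ToString("0.00");

    Alternatively, if the column is really named GPA, DataTable.Compute("AVG(GPA)", "") would let the DataTable do the aggregation itself; the column name here is an assumption.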


  • What is the meaning of "deassert" in this context?

    - by Sam.Rueby
    The English majors over at Dell provided me with this error message, produced by a PowerEdge 2950:

        CPU2 Status: Processor sensors for CPU2, IERR was deasserted

    I've Googled it, and random forum posts aren't providing me with a clear answer. It's also apparently not a word: http://dictionary.reference.com/browse/deassert?s=t I can guess the meaning. Assert: to state with assurance, confidence, or force. Okay, so the negative of that - the state of lack-of-confidence? What is this error message trying to tell me? Memory errors were grouped with this one: is it trying to say that IERR for CPU2 should be set, but is not? That the current system state is SNAFU but CPU2 sees everything as fine?


  • How to index a table with a Type 2 slowly changing dimension for optimal performance

    - by The Lazy DBA
    Suppose you have a table with a Type 2 slowly-changing dimension. Let's express this table as follows, with the following columns:

        * [Key]
        * [Value1]
        * ...
        * [ValueN]
        * [StartDate]
        * [ExpiryDate]

    In this example, let's suppose that [StartDate] is effectively the date on which the values for a given [Key] become known to the system. So our primary key would be composed of both [StartDate] and [Key]. When a new set of values arrives for a given [Key], we assign [ExpiryDate] to some pre-defined high surrogate value such as '12/31/9999'. We then set the existing "most recent" records for that [Key] to have an [ExpiryDate] equal to the [StartDate] of the new value - a simple update based on a join.

    So if we always wanted to get the most recent records for a given [Key], we know we could create a clustered index that is:

        * [ExpiryDate] ASC
        * [Key] ASC

    Although the keyspace may be very wide (say, a million keys), we can minimize the number of pages between reads by initially ordering them by [ExpiryDate]. And since we know the most recent record for a given key will always have an [ExpiryDate] of '12/31/9999', we can use that to our advantage.

    However... what if we want to get a point-in-time snapshot of all [Key]s at a given time? Theoretically, the entirety of the keyspace isn't all being updated at the same time. Therefore, for a given point in time, the window between [StartDate] and [ExpiryDate] is variable, so ordering by either [StartDate] or [ExpiryDate] would never yield a result in which all the records you're looking for are contiguous. Granted, you can immediately throw out all records in which the [StartDate] is greater than your defined point in time.

    In essence, in a typical RDBMS, what indexing strategy affords the best way to minimize the number of reads to retrieve the values for all keys for a given point in time? I realize I can at least maximize IO by partitioning the table by [Key], but this certainly isn't ideal. Alternatively, is there a different type of slowly-changing dimension that solves this problem in a more performant manner?
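
    A hedged T-SQL sketch of the shape being discussed (the names follow the question; whether this is optimal still depends on the data and workload): the clustered index serves the "current rows" case, and the point-in-time query filters on both dates, so a second index leading on [Key] and [StartDate] at least makes each key's short history seekable.

        -- sketch only: the SCD table and the two access paths described above
        CREATE TABLE dbo.DimExample
        (
            [Key]        int         NOT NULL,
            [Value1]     varchar(50) NULL,
            [StartDate]  datetime    NOT NULL,
            [ExpiryDate] datetime    NOT NULL,
            CONSTRAINT PK_DimExample PRIMARY KEY NONCLUSTERED ([StartDate], [Key])
        );

        -- "most recent rows" path: current rows cluster together under the '12/31/9999' expiry
        CREATE CLUSTERED INDEX CIX_DimExample_Expiry ON dbo.DimExample ([ExpiryDate], [Key]);

        -- point-in-time path: one seek per key into its history
        CREATE NONCLUSTERED INDEX IX_DimExample_Key_Start
            ON dbo.DimExample ([Key], [StartDate]) INCLUDE ([ExpiryDate], [Value1]);

        -- snapshot as of a given moment
        DECLARE @AsOf datetime;
        SET @AsOf = '20100504';

        SELECT [Key], [Value1]
        FROM dbo.DimExample
        WHERE [StartDate] <= @AsOf
          AND [ExpiryDate] >  @AsOf;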


  • How to add a new Stage to my default stage?

    - by Raigomaru
    I want to add a new stage called field to the default stage (I need to place different elements on it later), and then I want to add a bitmap called myBitmap to the field. But nothing happens, and I don't understand what I should do...

        var field:Stage = new Stage();
        field.x = 200;
        field.y = 200;
        field.width = 300;
        field.height = 300;
        stage.addChild(field);

        var bdWidth:Number = 100;
        var bdHeight:Number = 100;
        var bdTransparent:Boolean = true;
        var bdFillColorARGB:uint = 0xFF007090;

        var myBitmapData:BitmapData = new BitmapData(bdWidth, bdHeight, bdTransparent, bdFillColorARGB);
        var myBitmap:Bitmap = new Bitmap(myBitmapData);
        myBitmap.x = 10;
        myBitmap.y = 10;
        field.addChild(myBitmap);
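
    For what it's worth, a sketch of the usual approach (not from the original post): a Flash movie has only one Stage and it cannot be created with new, so a plain display-object container such as Sprite is normally used as the "field" instead; everything else from the question carries over. Timeline-style code, assuming the display classes are already available:

        // sketch: use a Sprite as the container instead of a second Stage
        var field:Sprite = new Sprite();
        field.x = 200;
        field.y = 200;
        // a Sprite's width/height come from its contents, so they are not set here
        stage.addChild(field);

        var myBitmapData:BitmapData = new BitmapData(100, 100, true, 0xFF007090);
        var myBitmap:Bitmap = new Bitmap(myBitmapData);
        myBitmap.x = 10;
        myBitmap.y = 10;
        field.addChild(myBitmap);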


  • Simple encryption - Sum of Hashes in C

    - by Dogbert
    I am attempting to demonstrate a simple proof of concept with respect to a vulnerability in a piece of code in a game written in C.

    Let's say that we want to validate a character login. The login is handled by the user choosing n items (let's just assume n=5 for now) from a graphical menu. The items are all medieval themed, e.g.:

         _______________________________
        |           |           |       |
        |    Bow    |   Sword   | Staff |
        |-----------|-----------|-------|
        |   Shield  |   Potion  |  Gold |
        |___________|___________|_______|

    The user must click on each item, then choose a number for each item. The validation algorithm then does the following:

        * Determines which items were selected
        * Drops each string to lowercase (i.e. Bow becomes bow, etc.)
        * Calculates a simple string hash for each string (i.e. bow = b=2, o=15, w=23, sum = 2+15+23 = 40)
        * Multiplies the hash by the value the user selected for the corresponding item; this new value is called the key
        * Sums together the keys for each of the selected items; this is the final validation hash

    IMPORTANT: The validator will accept this hash, along with non-zero multiples of it (i.e. if the final hash equals 1111, then 2222, 3333, 8888, etc. are also valid).

    So, for example, let's say I select:

        Bow (1)
        Sword (2)
        Staff (10)
        Shield (1)
        Potion (6)

    The algorithm drops each of these strings to lowercase, calculates their string hashes, multiplies each hash by the number selected for that string, then sums these keys together, e.g.:

        Final_Validation_Hash = 1*HASH(Bow) + 2*HASH(Sword) + 10*HASH(Staff) + 1*HASH(Shield) + 6*HASH(Potion)

    By application of Euler's Method, I plan to demonstrate that these hashes are not unique, and want to devise a simple application to prove it. In my case, for 5 items, I would essentially be trying to calculate:

        (B)(y) = (A_1)(x_1) + (A_2)(x_2) + (A_3)(x_3) + (A_4)(x_4) + (A_5)(x_5)

    Where:

        B is arbitrary
        A_j are the selected coefficients/values for each string/category
        x_j are the hash values for each string/category
        y is the final validation hash (e.g. 1111 above)
        B, y, A_j, x_j are all discrete-valued, positive, and non-zero (i.e. natural numbers)

    Can someone either assist me in solving this problem or point me to a similar example (i.e. code, worked-out equations, etc.)? I just need to solve the final step (i.e. (B)(y) = ...). Thank you all in advance.
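
    As a sketch of the kind of search involved (my own illustration, not code from the game; the item names and search bound are taken from or invented for the example): fix the five hash values, compute the target hash for the original coefficients, then enumerate other small coefficient vectors whose weighted sum lands on the target or on a non-zero multiple of it.

        #include <stdio.h>
        #include <string.h>
        #include <ctype.h>

        /* simple string hash from the question: a=1, b=2, ..., z=26, summed */
        static int hash(const char *s)
        {
            int sum = 0;
            for (; *s; s++)
                sum += tolower((unsigned char)*s) - 'a' + 1;
            return sum;
        }

        int main(void)
        {
            const char *items[5] = { "Bow", "Sword", "Staff", "Shield", "Potion" };
            int x[5], a[5] = { 1, 2, 10, 1, 6 };   /* the user's original choices */
            int target = 0;

            for (int j = 0; j < 5; j++) {
                x[j] = hash(items[j]);
                target += a[j] * x[j];
            }
            printf("original hash: %d\n", target);

            /* enumerate other small coefficient vectors that the validator would accept */
            int limit = 20;   /* arbitrary search bound for the demo */
            int b[5];
            for (b[0] = 1; b[0] <= limit; b[0]++)
            for (b[1] = 1; b[1] <= limit; b[1]++)
            for (b[2] = 1; b[2] <= limit; b[2]++)
            for (b[3] = 1; b[3] <= limit; b[3]++)
            for (b[4] = 1; b[4] <= limit; b[4]++) {
                int sum = 0;
                for (int j = 0; j < 5; j++)
                    sum += b[j] * x[j];
                /* accepted: the sum is a non-zero multiple of the original hash */
                if (sum % target == 0 && memcmp(b, a, sizeof a) != 0)
                    printf("collision: %d %d %d %d %d -> %d\n",
                           b[0], b[1], b[2], b[3], b[4], sum);
            }
            return 0;
        }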


  • Pairs from single list

    - by Apalala
    Often enough, I've found the need to process a list in pairs. I was wondering which would be the pythonic and efficient way to do it, and found this on Google:

        pairs = zip(t[::2], t[1::2])

    I thought that was pythonic enough, but after a recent discussion involving idioms versus efficiency, I decided to do some tests:

        import time
        from itertools import islice, izip

        def pairs_1(t):
            return zip(t[::2], t[1::2])

        def pairs_2(t):
            return izip(t[::2], t[1::2])

        def pairs_3(t):
            return izip(islice(t,None,None,2), islice(t,1,None,2))

        A = range(10000)
        B = xrange(len(A))

        def pairs_4(t):
            # ignore value of t!
            t = B
            return izip(islice(t,None,None,2), islice(t,1,None,2))

        for f in pairs_1, pairs_2, pairs_3, pairs_4:
            # time the pairing
            s = time.time()
            for i in range(1000):
                p = f(A)
            t1 = time.time() - s
            # time using the pairs
            s = time.time()
            for i in range(1000):
                p = f(A)
                for a, b in p:
                    pass
            t2 = time.time() - s
            print t1, t2, t2-t1

    These were the results on my computer:

        1.48668909073    2.63187503815   1.14518594742
        0.105381965637   1.35109519958   1.24571323395
        0.00257992744446 1.46182489395   1.45924496651
        0.00251388549805 1.70076990128   1.69825601578

    If I'm interpreting them correctly, that should mean that the implementation of lists, list indexing, and list slicing in Python is very efficient. It's a result both comforting and unexpected.

    Is there another, "better" way of traversing a list in pairs? Note that if the list has an odd number of elements then the last one will not be in any of the pairs. Which would be the right way to ensure that all elements are included?

    I added these two suggestions from the answers to the tests:

        def pairwise(t):
            it = iter(t)
            return izip(it, it)

        def chunkwise(t, size=2):
            it = iter(t)
            return izip(*[it]*size)

    These are the results:

        0.00159502029419 1.25745987892   1.25586485863
        0.00222492218018 1.23795199394   1.23572707176

    Results so far. Most pythonic and very efficient:

        pairs = izip(t[::2], t[1::2])

    Most efficient and very pythonic:

        pairs = izip(*[iter(t)]*2)

    It took me a moment to grok that the first answer uses two iterators while the second uses a single one. To deal with sequences with an odd number of elements, the suggestion has been to augment the original sequence by adding one element (None) that gets paired with the previous last element, something that can be achieved with itertools.izip_longest().
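
    A sketch of that last suggestion, assuming Python 2.6+ where itertools.izip_longest exists: the grouper idiom pads the final pair with a fill value when the list length is odd.

        from itertools import izip_longest

        def pairwise_padded(t, fillvalue=None):
            # same single-iterator trick as pairwise(), but the odd element
            # is kept and paired with fillvalue instead of being dropped
            it = iter(t)
            return izip_longest(it, it, fillvalue=fillvalue)

        print list(pairwise_padded([1, 2, 3, 4, 5]))
        # -> [(1, 2), (3, 4), (5, None)]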


  • OS X Automator empty, blank or null value.

    - by Brian
    I have some data files, mostly Excel, Word and PDF files; most of the files have no extension on them, so they are missing the .doc or .xls. This data now needs to be used in a Windows environment. I have created Automator apps for each of the file types I want to add the extension to. The problem is that they also add the extension to files that already have one, so data.xls becomes data.xls.xls. I would like to figure out a way to add the extension only to files without one. How do I tell the Finder filter that I only want it to return files without extensions? I see how to add a line to filter by extension, but I don't know how to tell it I want only blank or null, i.e. files without any extension. Thanks
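
    Not an Automator answer, but a Terminal sketch of the same idea, assuming the Mac's file command supports --mime-type (the path and the extension map below are illustrations; extend them as needed): only filenames without a dot get touched, and the extension is picked from the detected type.

        #!/bin/bash
        # sketch: append an extension only to files whose name contains no dot
        for f in /path/to/datafiles/*; do
            name=$(basename "$f")
            [[ "$name" == *.* ]] && continue          # already has an extension, skip it

            case $(file -b --mime-type "$f") in
                application/pdf)           ext=pdf ;;
                application/msword)        ext=doc ;;
                application/vnd.ms-excel)  ext=xls ;;
                *)                         ext="" ;;   # unknown type, leave it alone
            esac

            [[ -n "$ext" ]] && mv "$f" "$f.$ext"
        done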


  • How to make increasing numbers in filenames in C?

    - by zaplec
    Hi, I have a little problem. I need to do some little operations on quite a few files in one little program. So far I have decided to handle them in a single loop where I just change the number after the name. The files are all named TFxx.txt, where xx is an increasing number from 1 to 80. So how can I open them all in a single loop, one after another? I have tried this:

        for(i=0; i<=80; i++) {
            char name[8] = "TF"+i+".txt";
            FILE = open(name, r);
            /* Do something */
        }

    As you can see, the second line would work in Python but not in C. I have tried to do similar running numbering in C, but I haven't found out yet how to do that. The format doesn't need to be as it is on the second line, but I'd like some advice on how to solve this problem. All I need is to be able to open many files and do the same operations on them.
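
    A sketch of the usual C approach: build each name into a buffer with snprintf, then open it with fopen (the loop bounds follow the question's 1 to 80; error handling kept minimal).

        #include <stdio.h>

        int main(void)
        {
            char name[16];

            for (int i = 1; i <= 80; i++) {
                snprintf(name, sizeof name, "TF%d.txt", i);   /* "TF1.txt" ... "TF80.txt" */

                FILE *fp = fopen(name, "r");
                if (fp == NULL) {
                    fprintf(stderr, "could not open %s\n", name);
                    continue;
                }

                /* Do something with fp */

                fclose(fp);
            }
            return 0;
        }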


  • What did the emulator do to my laptop?

    - by Garry
    I played Sega games with KEGA.exe (a Sega emulator) and it made my right key malfunction. Before that day I had played it on my notebook in fullscreen mode too, and my Acer Aspire One suddenly restarted while the emulator was running; before the screen went black to boot, it was blue with many words that I couldn't read, but I remember something like 000000 x 0000000 x 000000 and so on. When I played without fullscreen mode it didn't happen, but it still left my right key malfunctioning - even when I go into the boot setup, my right key doesn't work. Do you know what the problem with my emulator is? Can you explain it to me?


  • Nike OCR Twitter API RSVP software not working

    - by daniel
    I recently purchased an OCR program that reads through Nike's Twitter page, picks a circled word out of a picture, and sends it back in a Twitter DM. However, it recently stopped working, probably because the person from whom I bought it deleted the server; he hasn't responded to my emails. It is a Twitter API app. It still goes through the tweets and I can enter usernames, but it no longer reads the image or sends a DM. If anyone knows how I can go about fixing this, it would be a huge help. Here is the program website: http://rsvpocr.nanoworking.com


  • Sort Grid Columns of mixed type in EXTJS Grid

    - by Amit
    Hello, I want to sort Ext JS grid columns. A column has type float, but from the server side I am getting values which can contain a "-" (dash). The grid then displays NaN instead of the dash, and sorting no longer works. My requirement is a custom sort that sorts first based on the numbers and then based on the strings. Please suggest something - a renderer also does not work for me. My JSON string is:

        {metaData:{"totalProperty":"total", "root":"records","fields":[
        {"header":"Part Number##false","name":"XJE010^VT-007!0","type":"string"},
        {"header":"Marketing Status##false","name":"STP716^VT-007!0","type":"string"},
        {"header":"Package##false","name":"XJE016^VT-007!0","type":"string"},
        {"header":"Automotive Grade##false","name":"STP472^VT-007!0","type":"string"},
        {"header":"VDSS##false","name":"XJG810^VT-007!0","type":"float"},
        {"header":"Drain Current (Dc)(I_D) % (A)##false","name":"XJG273^VT-006!0","type":"float"},
        {"header":"RDS(on) (@VGS=10V) % (Ω)##false","name":"XJG640^VT-006!3","type":"float"},
        {"header":"Features##false","name":"GNP023^VT-007!0","type":"string"},
        {"header":"RDS(on) (@4.5 or 5V) % (Ω)##false","name":"XJG640^VT-006!6","type":"float"},
        {"header":"RDS(on) (@2.7V) % (Ω)##false","name":"XJG640^VT-006!7","type":"float"},
        {"header":"RDS(on) (@1.8V) % (Ω)##false","name":"XJG640^VT-006!8","type":"float"},
        {"header":"Free Samples##false","name":"STP0881^VT-007!0","type":"string"},
        {"header":"Total Gate Charge(Qg) typ ()##true","name":"STP049^VT-002!0","type":"float"},
        {"header":"Total Power Dissipation(PD) % (W)##true","name":"XJG820^VT-006!0","type":"float"}]},
        "success":"true", "total":13,"records":[
        {"XJE010^VT-007!0":"STB80PF55$$/cn/analog/product/67164.jsp","STP716^VT-007!0":"Active","XJE016^VT-007!0":"D2PAK","STP472^VT-007!0":"_","XJG810^VT-007!0":"-55","XJG273^VT-006!0":"80","XJG640^VT-006!3":".018","GNP023^VT-007!0":"-","XJG640^VT-006!6":"-","XJG640^VT-006!7":"-","XJG640^VT-006!8":"-","STP0881^VT-007!0":"No","STP049^VT-002!0":"190","XJG820^VT-006!0":"300"},
        {"XJE010^VT-007!0":"STD10PF06$$/cn/analog/product/64543.jsp","STP716^VT-007!0":"Active","XJE016^VT-007!0":"IPAK TO-251 TO 252 DPAK","STP472^VT-007!0":"_","XJG810^VT-007!0":"-60","XJG273^VT-006!0":"-10","XJG640^VT-006!3":".2","GNP023^VT-007!0":"-","XJG640^VT-006!6":"-","XJG640^VT-006!7":"-","XJG640^VT-006!8":"-","STP0881^VT-007!0":"No ...

    Regards, Amit
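
    A hedged sketch of one approach (my own illustration, assuming Ext JS 3.x and that the record fields can be defined or post-processed on the client; the field name below is one from the JSON above): give the float columns a custom sortType so a dash sorts after the real numbers instead of turning into NaN.

        // sketch: convert values only for sorting; dashes sink to the bottom
        var numericThenText = function (value) {
            var n = parseFloat(value);
            return isNaN(n) ? Number.MAX_VALUE : n;   // non-numeric values ("-") sort last
        };

        // example field definition for the store's record type
        var fields = [
            { name: 'XJG810^VT-007!0', type: 'string', sortType: numericThenText }
            // ... remaining fields
        ];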


  • Spring Integration 1.0 RC2: Streaming file content?

    - by gdm
    I've been trying to find information on this, but due to the immaturity of the Spring Integration framework I haven't had much luck. Here is my desired workflow:

        1. New files are placed in an 'Incoming' directory.
        2. Files are picked up using a file:inbound-channel-adapter.
        3. The file content is streamed, N lines at a time, to a 'Stage 1' channel, which parses the line into an intermediary (shared) representation.
        4. This parsed line is routed to multiple 'Stage 2' channels.
        5. Each 'Stage 2' channel does its own processing on the N available lines to convert them to a final representation. This channel must have a queue which ensures no Stage 2 channel is overwhelmed in the event that one channel processes significantly slower than the others.
        6. The final representation of the N lines is written to a file. There will be as many output files as there were routing destinations in step 4.

    *'N' above stands for any reasonable number of lines to read at a time, from [1, whatever I can fit into memory reasonably], but is guaranteed to always be less than the number of lines in the full file.

    How can I accomplish streaming (steps 3, 4, 5) in Spring Integration? It's fairly easy to do without streaming the files, but my files are large enough that I cannot read the entire file into memory.

    As a side note, I have a working implementation of this workflow without Spring Integration, but since we're using Spring Integration in other places in our project, I'd like to try it here to see how it performs and how the resulting code compares for length and clarity.


  • Keyboard shortcuts in non-English version of Microsoft Office

    - by Squall
    I have a big problem with the Portuguese version of MS Office 2007 and 2010: the standard shortcuts that almost any application uses are changed. Some shortcuts that are not working: Ctrl+S (save), Ctrl+F (find) and Ctrl+A (select all). I want to configure it to use the shortcuts of the English version. There is an option that allows configuring each shortcut separately; furthermore, I would have to configure it for each app - if I configure Word, I will have to configure it again for Excel. How can I use the shortcuts of the English version of MS Office regardless of the Office language? Thanks


  • Google Chrome custom search engine for secure Wikipedia

    - by gdejohn
    I have this custom search engine set up in Google Chrome: https://encrypted.google.com/search?q=site%3Aen.wikipedia.org+%s&btnI=745 It searches Google for site:en.wikipedia.org {query}, and the btnI=745 is for I'm Feeling Lucky, so it automatically redirects to the first result. I like this better than using Wikipedia's search function directly because it gives me very effective approximate string matching, so I can misspell my search, or leave a word out, or just search for some keywords, and I still get what I'm looking for right away. What I'd like is for it to use Wikipedia's secure gateway: https://secure.wikimedia.org/wikipedia/en/wiki/ It's easy enough to set up a custom search engine that uses the secure version of Wikipedia's search function directly, but I can't figure out how to correctly incorporate it into my version going through Google. Nothing I've tried works.


  • Database localization

    - by Don
    Hi, I have a number of database tables that contain name and description columns which need to be localized. My initial attempt at designing a DB schema that would support this was something like:

        product
        -------
        id
        name
        description

        local_product
        -------------
        id
        product_id
        local_name
        local_description
        locale_id

        locale
        ------
        id
        locale

    However, this solution requires a new local_ table for every table that contains name and description columns requiring localization. In an attempt to avoid this overhead I redesigned the schema so that only a single localization table is needed:

        product
        -------
        id
        localization_id

        localization
        ------------
        id
        local_name
        local_description
        locale_id

        locale
        ------
        id
        locale

    Here's an example of the data which would be stored in this schema when there are 2 tables (product and country) requiring localization:

        country
        id, localization_id
        -------------------
        1, 5

        product
        id, localization_id
        -------------------
        1, 2

        localization
        id, local_name, local_description, locale_id
        ---------------------------------------------
        2, apple, a delicious fruit, 2
        2, pomme, un fruit délicieux, 3
        2, apfel, ein köstliches Obst, 4
        5, ireland, a small country, 2
        5, irlande, un petite pay, 3

        locale
        id, locale
        ----------
        2, en
        3, fr
        4, de

    Notice that the compound primary key of the localization table is (id, locale_id), but the foreign key in the product table only refers to the first element of this compound PK. This seems like 'a bad thing' from the POV of normalization. Is there any way I can fix this problem, or alternatively, is there a completely different schema that supports localization without creating a separate table for each localizable table?

    Update: A number of respondents have proposed a solution that requires creating a separate table for each localizable table. However, this is precisely what I'm trying to avoid. The schema I've proposed above almost solves the problem to my satisfaction, but I'm unhappy about the fact that the localization_id foreign keys only refer to part of the corresponding primary key in the localization table. Thanks, Don
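
    For what it's worth, a sketch of one common variation (my own illustration, not from the thread; table names are invented, and the locale table from the question is assumed to exist): give the shared table a parent "localization set" row whose single-column key is what product references, and hang the per-locale rows off it. The foreign key then targets a whole primary key, which addresses the normalization concern while keeping a single shared structure.

        -- sketch: one shared pair of tables for all localizable entities
        CREATE TABLE localization_set (
            id INTEGER PRIMARY KEY
        );

        CREATE TABLE localization (
            set_id            INTEGER NOT NULL REFERENCES localization_set (id),
            locale_id         INTEGER NOT NULL REFERENCES locale (id),
            local_name        VARCHAR(255) NOT NULL,
            local_description VARCHAR(1000),
            PRIMARY KEY (set_id, locale_id)
        );

        CREATE TABLE product (
            id              INTEGER PRIMARY KEY,
            localization_id INTEGER NOT NULL REFERENCES localization_set (id)
        );

        -- look up a product's texts in one locale
        SELECT l.local_name, l.local_description
        FROM product p
        JOIN localization l  ON l.set_id = p.localization_id
        JOIN locale       lo ON lo.id = l.locale_id
        WHERE p.id = 1 AND lo.locale = 'en';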


  • cleaning up pdftotext font issues

    - by mankoff
    I'm using pdftotext to make an ASCII version of a PDF document (made with LaTeX), because collaborators prefer a simple document in MS Word. The plain-text version I see looks good, but upon closer inspection the f character seems to be frequently mis-converted depending on what characters follow. For example, fi and fl often seem to become one special character, which I will try to paste here: ﬁ and ﬂ. What is the best way to clean up the output of pdftotext? I am thinking sed might be the right tool, but am not sure how to detect these special characters.
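
    A sketch of the sed approach, assuming the pdftotext output is UTF-8 and that the offending characters are the typographic ligatures (the filenames are placeholders; add other ligatures such as ﬀ, ﬃ, ﬄ as they show up):

        # sketch: expand the ligature code points back to plain letter pairs
        sed 's/ﬁ/fi/g; s/ﬂ/fl/g; s/ﬀ/ff/g; s/ﬃ/ffi/g; s/ﬄ/ffl/g' document.txt > document-clean.txt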

