Search Results

Search found 5885 results on 236 pages for 'finally'.


  • ClearCase - selective merge

    - by Keshav
    Hi, I have a peculiar ClearCase question. I can't fully explain why I'm using such a confusing setup, but I have to (thanks to a mistake someone made long ago). Here's a bit of detail: B1 is a contaminated branch where my group's changes and another group's changes got mixed together so badly that there is no way of telling which code is whose. The proposed solution is to create a new branch called B2 (at the same level as B1) and put all of the other group's unmodified code on it (the way to do that is to merge B1 into B2 and then strip changes out of it until it is back to the original), then create a CR branch on B1 that keeps only my group's newly added or modified files, and finally create an integration branch out of B2 and merge the changes from B1's CR branch onto B2's integration branch. Here is what I did (the use case: a directory D containing files a, b and c; my group modified file a, while b and c were not touched). There is a branch B1 with files a, b and c, and another branch B2. A merge is done from B1 to B2, so B2 also has a, b and c; at this point B1 and B2 are identical. Now I delete file a from branch B2 (rmname), so B2 has only b and c, and I put a label called Label1 on this branch; the code labelled Label1 is now the other group's unmodified code. Next I create a sub-branch called CR1 from B1 and delete all the files that exist on B2 (i.e. b and c), so that it carries only the modified code, which in my case is file a. At this point branch B2 with label Label1 has files b and c (the unmodified code) and branch CR1 off B1 has only a (the file we modified). Now I create an integration branch coming off B2 at Label1 and merge the CR branch onto it, expecting it to end up with all three files a, b and c; all I would then need to do is look at the version tree to see who modified what. The problem is that because I had done the rmname of file a on branch B2 before applying the label, the merge does not actually bring file a over from the CR branch. How do I get around that? I want to merge selectively; is that possible? Sorry if this is a bad design; I'm not really conversant with ClearCase and have limited options and time to clean up someone else's mess.
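    A hedged sketch of how the selective merge could be driven with cleartool (branch, label and file names are the ones from the question; the version selectors and flags are assumptions, so check them against your VOB before running anything). Because file a was rmname'd from B2 before Label1 was applied, the directory D on the integration branch no longer catalogues a, so the directory version has to be merged as well as the element itself:

        # in a view whose config spec selects the integration branch off B2/Label1
        cd /vobs/myvob/D

        # 1. merge the directory version so the element "a" reappears in D
        cleartool checkout -nc .
        cleartool merge -to . -version /main/B1/CR1/LATEST

        # 2. merge the element itself from the CR1 branch
        cleartool checkout -nc a
        cleartool merge -to a -version /main/B1/CR1/LATEST

        # 3. check both in
        cleartool checkin -nc . a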

    Read the article

  • e2fsck extremely slow, although enough memory exists

    - by kaefert
    I've got this external USB-Disk: kaefert@blechmobil:~$ lsusb -s 2:3 Bus 002 Device 003: ID 0bc2:3320 Seagate RSS LLC As can be seen in this dmesg output, there are some problems that prevents that disk from beeing mounted: kaefert@blechmobil:~$ dmesg | grep sdb [ 114.474342] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB) [ 114.475089] sd 5:0:0:0: [sdb] Write Protect is off [ 114.475092] sd 5:0:0:0: [sdb] Mode Sense: 43 00 00 00 [ 114.475959] sd 5:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA [ 114.477093] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB) [ 114.501649] sdb: sdb1 [ 114.502717] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB) [ 114.504354] sd 5:0:0:0: [sdb] Attached SCSI disk [ 116.804408] EXT4-fs (sdb1): ext4_check_descriptors: Checksum for group 3976 failed (47397!=61519) [ 116.804413] EXT4-fs (sdb1): group descriptors corrupted! So I went and fired up my favorite partition manager - gparted, and told it to verify and repair the partition sdb1. This made gparted call e2fsck (version 1.42.4 (12-Jun-2012)) e2fsck -f -y -v /dev/sdb1 Although gparted called e2fsck with the "-v" option, sadly it doesn't show me the output of my e2fsck process (bugreport https://bugzilla.gnome.org/show_bug.cgi?id=467925 ) I started this whole thing on Sunday (2012-11-04_2200) evening, so about 48 hours ago, this is what htop says about it now (2012-11-06-1900): PID USER PRI NI VIRT RES SHR S CPU% MEM% TIME+ Command 3704 root 39 19 1560M 1166M 768 R 98.0 19.5 42h56:43 e2fsck -f -y -v /dev/sdb1 Now I found a few posts on the internet that discuss e2fsck running slow, for example: http://gparted-forum.surf4.info/viewtopic.php?id=13613 where they write that its a good idea to see if the disk is just that slow because maybe its damaged, and I think these outputs tell me that this is not the case in my case: kaefert@blechmobil:~$ sudo hdparm -tT /dev/sdb /dev/sdb: Timing cached reads: 3562 MB in 2.00 seconds = 1783.29 MB/sec Timing buffered disk reads: 82 MB in 3.01 seconds = 27.26 MB/sec kaefert@blechmobil:~$ sudo hdparm /dev/sdb /dev/sdb: multcount = 0 (off) readonly = 0 (off) readahead = 256 (on) geometry = 364801/255/63, sectors = 5860533160, start = 0 However, although I can read quickly from that disk, this disk speed doesn't seem to be used by e2fsck, considering tools like gkrellm or iotop or this: kaefert@blechmobil:~$ iostat -x Linux 3.2.0-2-amd64 (blechmobil) 2012-11-06 _x86_64_ (2 CPU) avg-cpu: %user %nice %system %iowait %steal %idle 14,24 47,81 14,63 0,95 0,00 22,37 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util sda 0,59 8,29 2,42 5,14 43,17 160,17 53,75 0,30 39,80 8,72 54,42 3,95 2,99 sdb 137,54 5,48 9,23 0,20 587,07 22,73 129,35 0,07 7,70 7,51 16,18 2,17 2,04 Now I researched a little bit on how to find out what e2fsck is doing with all that processor time, and I found the tool strace, which gives me this: kaefert@blechmobil:~$ sudo strace -p3704 lseek(4, 41026998272, SEEK_SET) = 41026998272 write(4, "\212\354K[_\361\3nl\212\245\352\255jR\303\354\312Yv\334p\253r\217\265\3567\325\257\3766"..., 4096) = 4096 lseek(4, 48404766720, SEEK_SET) = 48404766720 read(4, "\7t\260\366\346\337\304\210\33\267j\35\377'\31f\372\252\ffU\317.y\211\360\36\240c\30`\34"..., 4096) = 4096 lseek(4, 41027002368, SEEK_SET) = 41027002368 write(4, "\232]7Ws\321\352\t\1@[+5\263\334\276{\343zZx\352\21\316`1\271[\202\350R`"..., 4096) = 4096 lseek(4, 
48404770816, SEEK_SET) = 48404770816 read(4, "\17\362r\230\327\25\346//\210H\v\311\3237\323K\304\306\361a\223\311\324\272?\213\tq \370\24"..., 4096) = 4096 lseek(4, 41027006464, SEEK_SET) = 41027006464 write(4, "\367yy>x\216?=\324Z\305\351\376&\25\244\210\271\22\306}\276\237\370(\214\205G\262\360\257#"..., 4096) = 4096 lseek(4, 48404774912, SEEK_SET) = 48404774912 read(4, "\365\25\0\21|T\0\21}3t_\272\373\222k\r\177\303\1\201\261\221$\261B\232\3142\21U\316"..., 4096) = 4096 ^CProcess 3704 detached around 16 of these lines every second, so 4 read and 4 write operations every second, which I don't consider to be a lot.. And finally, my question: Will this process ever finish? If those numbers from fseek (48404774912) represent bytes, that would be something like 45 gigabytes, with this beeing a 3 terrabyte disk, which would give me 134 days to go, if the speed stays constant, and he scans the disk like this completly and only once. Do you have some advice for me? I have most of the data on that disk elsewhere, but I've put a lot of hours into sorting and merging it to this disk, so I would prefer to getting this disk up and running again, without formatting it anew. I don't think that the hardware is damaged since the disk is only a few months and since I can't see any I/O errors in the dmesg output. UPDATE: I just looked at the strace output again (2012-11-06_2300), now it looks like this: lseek(4, 1419860611072, SEEK_SET) = 1419860611072 read(4, "3#\f\2447\335\0\22A\355\374\276j\204'\207|\217V|\23\245[\7VP\251\242\276\207\317:"..., 4096) = 4096 lseek(4, 43018145792, SEEK_SET) = 43018145792 write(4, "]\206\231\342Y\204-2I\362\242\344\6R\205\361\324\177\265\317C\334V\324\260\334\275t=\10F."..., 4096) = 4096 lseek(4, 1419860615168, SEEK_SET) = 1419860615168 read(4, "\262\305\314Y\367\37x\326\245\226\226\320N\333$s\34\204\311\222\7\315\236\336\300TK\337\264\236\211n"..., 4096) = 4096 lseek(4, 43018149888, SEEK_SET) = 43018149888 write(4, "\271\224m\311\224\25!I\376\16;\377\0\223H\25Yd\201Y\342\r\203\271\24eG<\202{\373V"..., 4096) = 4096 lseek(4, 1419860619264, SEEK_SET) = 1419860619264 read(4, ";d\360\177\n\346\253\210\222|\250\352T\335M\33\260\320\261\7g\222P\344H?t\240\20\2548\310"..., 4096) = 4096 lseek(4, 43018153984, SEEK_SET) = 43018153984 write(4, "\360\252j\317\310\251G\227\335{\214`\341\267\31Y\202\360\v\374\307oq\3063\217Z\223\313\36D\211"..., 4096) = 4096 So this number of the lseeks before the reads, like 1419860619264 are already a lot bigger, standing for 1.29 terabytes if the numbers are bytes, so it doesn't seem to be a linear progress on a big scale, maybe there are only some areas that need work, that have big gaps in between them. (times are in CET)
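    A hedged aside on the "will this ever finish" part (assuming the partition stays unmounted, which the failed mount suggests): gparted hides e2fsck's output, but running the check by hand with a completion bar shows which pass it is in and how far along it is, which is a much better progress signal than watching lseek offsets in strace.

        # -C 0 prints pass information and a completion bar to the terminal
        sudo e2fsck -f -y -C 0 /dev/sdb1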

    Read the article

  • Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host

    - by xnoor
    i have an update server that sends client updates through TCP port 12000, the sending of a single file is successful only the first time, but after that i get an error message on the server "Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host", if i restart the update service on the server, it works again only one, i have normal multithreaded windows service SERVER CODE namespace WSTSAU { public partial class ApplicationUpdater : ServiceBase { private Logger logger = LogManager.GetCurrentClassLogger(); private int _listeningPort; private int _ApplicationReceivingPort; private string _setupFilename; private string _startupPath; public ApplicationUpdater() { InitializeComponent(); } protected override void OnStart(string[] args) { init(); logger.Info("after init"); Thread ListnerThread = new Thread(new ThreadStart(StartListener)); ListnerThread.IsBackground = true; ListnerThread.Start(); logger.Info("after thread start"); } private void init() { _listeningPort = Convert.ToInt16(ConfigurationSettings.AppSettings["ListeningPort"]); _setupFilename = ConfigurationSettings.AppSettings["SetupFilename"]; _startupPath = System.IO.Path.GetDirectoryName(System.Reflection.Assembly.GetExecutingAssembly().GetName().CodeBase).Substring(6); } private void StartListener() { try { logger.Info("Listening Started"); ThreadPool.SetMinThreads(50, 50); TcpListener listener = new TcpListener(_listeningPort); listener.Start(); while (true) { TcpClient c = listener.AcceptTcpClient(); ThreadPool.QueueUserWorkItem(ProcessReceivedMessage, c); } } catch (Exception ex) { logger.Error(ex.Message); } } void ProcessReceivedMessage(object c) { try { TcpClient tcpClient = c as TcpClient; NetworkStream Networkstream = tcpClient.GetStream(); byte[] _data = new byte[1024]; int _bytesRead = 0; _bytesRead = Networkstream.Read(_data, 0, _data.Length); MessageContainer messageContainer = new MessageContainer(); messageContainer = SerializationManager.XmlFormatterByteArrayToObject(_data, messageContainer) as MessageContainer; switch (messageContainer.messageType) { case MessageType.ApplicationUpdateMessage: ApplicationUpdateMessage appUpdateMessage = new ApplicationUpdateMessage(); appUpdateMessage = SerializationManager.XmlFormatterByteArrayToObject(messageContainer.messageContnet, appUpdateMessage) as ApplicationUpdateMessage; Func<ApplicationUpdateMessage, bool> HandleUpdateRequestMethod = HandleUpdateRequest; IAsyncResult cookie = HandleUpdateRequestMethod.BeginInvoke(appUpdateMessage, null, null); bool WorkerThread = HandleUpdateRequestMethod.EndInvoke(cookie); break; } } catch (Exception ex) { logger.Error(ex.Message); } } private bool HandleUpdateRequest(ApplicationUpdateMessage appUpdateMessage) { try { TcpClient tcpClient = new TcpClient(); NetworkStream networkStream; FileStream fileStream = null; tcpClient.Connect(appUpdateMessage.receiverIpAddress, appUpdateMessage.receiverPortNumber); networkStream = tcpClient.GetStream(); fileStream = new FileStream(_startupPath + "\\" + _setupFilename, FileMode.Open, FileAccess.Read); FileInfo fi = new FileInfo(_startupPath + "\\" + _setupFilename); BinaryReader binFile = new BinaryReader(fileStream); FileUpdateMessage fileUpdateMessage = new FileUpdateMessage(); fileUpdateMessage.fileName = fi.Name; fileUpdateMessage.fileSize = fi.Length; MessageContainer messageContainer = new MessageContainer(); messageContainer.messageType = MessageType.FileProperties; messageContainer.messageContnet = 
SerializationManager.XmlFormatterObjectToByteArray(fileUpdateMessage); byte[] messageByte = SerializationManager.XmlFormatterObjectToByteArray(messageContainer); networkStream.Write(messageByte, 0, messageByte.Length); int bytesSize = 0; byte[] downBuffer = new byte[2048]; while ((bytesSize = fileStream.Read(downBuffer, 0, downBuffer.Length)) > 0) { networkStream.Write(downBuffer, 0, bytesSize); } fileStream.Close(); tcpClient.Close(); networkStream.Close(); return true; } catch (Exception ex) { logger.Info(ex.Message); return false; } finally { } } protected override void OnStop() { } } i have to note something that my windows service (server) is multithreaded.. i hope anyone can help with this
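    A hedged sketch, not a verified root cause (type and field names are the ones from the question): nothing in HandleUpdateRequest disposes the stream, file or client when an exception is thrown, and tcpClient.Close() is called before networkStream.Close(), so sockets can leak or be torn down in an awkward order between requests. Wrapping everything in using blocks makes the cleanup deterministic and is worth ruling out first; the "works exactly once" pattern can also mean the receiving side only accepts a single connection, which is worth checking separately.

        private bool HandleUpdateRequest(ApplicationUpdateMessage appUpdateMessage)
        {
            try
            {
                using (var tcpClient = new TcpClient())
                {
                    tcpClient.Connect(appUpdateMessage.receiverIpAddress, appUpdateMessage.receiverPortNumber);
                    using (NetworkStream networkStream = tcpClient.GetStream())
                    using (var fileStream = new FileStream(Path.Combine(_startupPath, _setupFilename),
                                                           FileMode.Open, FileAccess.Read))
                    {
                        // ... build and send the FileProperties header exactly as before ...
                        var downBuffer = new byte[2048];
                        int bytesSize;
                        while ((bytesSize = fileStream.Read(downBuffer, 0, downBuffer.Length)) > 0)
                        {
                            networkStream.Write(downBuffer, 0, bytesSize);
                        }
                        networkStream.Flush();   // push out anything still buffered
                    }                            // stream disposed before the client
                }
                return true;
            }
            catch (Exception ex)
            {
                logger.Error(ex.Message);
                return false;
            }
        }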

    Read the article

  • Not sure why I'm getting a NullPointerException when creating a Swing component

    - by Alex
    The error occurs when creating the Box object. public void drawBoard(Board board){ for(int row = 0; row < 8; row++){ for(int col = 0; col < 8; col++){ Box box = new Box(board.getSquare(col, row).getColour(), col, row); squarePanel[col][row].add(box); } } Board is given from the Game constructor here (another class): public Game() throws Throwable{ View graphics = new View(); board = new Board(); board.setDefault(); graphics.drawBoard(board); } The Board constructor looks like this: public Board(){ grid = new Square[COLUMNS][ROWS]; for(int row = 0; row < 8; row++){ for(int col = 0; col < 8; col++){ grid[col][row] = new Square(this); } } for(int row = 0; row < 8; row++){ for(int col = 0; col < 4; col++){ int odd = 2*col + 1; int even = 2*col; getSquare(odd, row).setColour(Color.BLACK); getSquare(even, row).setColour(Color.WHITE); } } } And finally the Box class: class Box extends JComponent{ Color boxColour; int col, row; public Box(Color boxColour, int col, int row){ this.boxColour = boxColour; this.col = col; this.row = row; repaint(); } public void paint(Graphics drawBox){ drawBox.setColor(boxColour); drawBox.drawRect(50*col, 50*row, 50, 50); drawBox.fillRect(50*col, 50*row, 50, 50); } } So while looping through the array, it uses the two integers as coordinates to create the Box. The coordinates are referenced and then repaint() is run. The box also gets the colour, using the two integers, from the Square in the Board class. Since the colour is already set, before the drawBoard(board) method is run, that shouldn't be a problem, right? Exception in thread "main" java.lang.NullPointerException at View.drawBoard(View.java:38) at Game.<init>(Game.java:21) at Game.main(Game.java:14) The relevant part of Square import java.awt.Color; public class Square { private Piece piece; private Board board; private Color squareColour; public Square(Board board){ this.board = board; } public void setColour(Color squareColour){ this.squareColour = squareColour; } public Color getColour(){ return squareColour; }
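    A hedged guess in code form (the View class is not shown in the question, so the field and constructor below are assumptions): the trace points at View.drawBoard(View.java:38), and the usual cause of an NPE on squarePanel[col][row].add(box) is that the array, or the cell being indexed, was never filled in before drawBoard runs. A minimal sketch of a pre-initialised panel grid:

        import java.awt.GridLayout;
        import javax.swing.JFrame;
        import javax.swing.JPanel;

        public class View extends JFrame {
            private final JPanel[][] squarePanel = new JPanel[8][8];

            public View() {
                setLayout(new GridLayout(8, 8));
                for (int row = 0; row < 8; row++) {
                    for (int col = 0; col < 8; col++) {
                        squarePanel[col][row] = new JPanel();  // without this, add(box) dereferences null
                        add(squarePanel[col][row]);
                    }
                }
            }
        }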

    Read the article

  • Makefile linking issue: Undefined symbols for architecture x86_64

    - by user1035839
    I am working on getting a few files to link together using my make file and c++ and am getting the following error when running make. g++ -bind_at_load `pkg-config --cflags opencv` -c -o compute_gist.o compute_gist.cpp g++ -bind_at_load `pkg-config --cflags opencv` -c -o gist.o gist.cpp g++ -bind_at_load `pkg-config --cflags opencv` -c -o standalone_image.o standalone_image.cpp g++ -bind_at_load `pkg-config --cflags opencv` -c -o IplImageConverter.o IplImageConverter.cpp g++ -bind_at_load `pkg-config --cflags opencv` -c -o GistCalculator.o GistCalculator.cpp g++ -bind_at_load `pkg-config --cflags opencv` `pkg-config --libs opencv` compute_gist.o gist.o standalone_image.o IplImageConverter.o GistCalculator.o -o rungist Undefined symbols for architecture x86_64: "color_gist_scaletab(color_image_t*, int, int, int const*)", referenced from: _main in compute_gist.o ld: symbol(s) not found for architecture x86_64 collect2: ld returned 1 exit status make: *** [rungist] Error 1 My makefile is as follows (Note, I don't need opencv bindings yet, but will be coding in opencv later. CXX = g++ CXXFLAGS = -bind_at_load `pkg-config --cflags opencv` LFLAGS = `pkg-config --libs opencv` SRC = \ compute_gist.cpp \ gist.cpp \ standalone_image.cpp \ IplImageConverter.cpp \ GistCalculator.cpp OBJS = $(SRC:.cpp=.o) rungist: $(OBJS) $(CXX) $(CXXFLAGS) $(LFLAGS) $(OBJS) -o $@ all: rungist clean: rm -rf $(OBJS) rungist The method header is located in gist.h float *color_gist_scaletab(color_image_t *src, int nblocks, int n_scale, const int *n_orientations); And the method is defined in gist.cpp float *color_gist_scaletab(color_image_t *src, int w, int n_scale, const int *n_orientation) { And finally the compute_gist.cpp (main file) #include <stdio.h> #include <stdlib.h> #include <string.h> #include "gist.h" static color_image_t *load_ppm(const char *fname) { FILE *f=fopen(fname,"r"); if(!f) { perror("could not open infile"); exit(1); } int width,height,maxval; if(fscanf(f,"P6 %d %d %d",&width,&height,&maxval)!=3 || maxval!=255) { fprintf(stderr,"Error: input not a raw PPM with maxval 255\n"); exit(1); } fgetc(f); /* eat the newline */ color_image_t *im=color_image_new(width,height); int i; for(i=0;i<width*height;i++) { im->c1[i]=fgetc(f); im->c2[i]=fgetc(f); im->c3[i]=fgetc(f); } fclose(f); return im; } static void usage(void) { fprintf(stderr,"compute_gist options... 
[infilename]\n" "infile is a PPM raw file\n" "options:\n" "[-nblocks nb] use a grid of nb*nb cells (default 4)\n" "[-orientationsPerScale o_1,..,o_n] use n scales and compute o_i orientations for scale i\n" ); exit(1); } int main(int argc,char **args) { const char *infilename="/dev/stdin"; int nblocks=4; int n_scale=3; int orientations_per_scale[50]={8,8,4}; while(*++args) { const char *a=*args; if(!strcmp(a,"-h")) usage(); else if(!strcmp(a,"-nblocks")) { if(!sscanf(*++args,"%d",&nblocks)) { fprintf(stderr,"could not parse %s argument",a); usage(); } } else if(!strcmp(a,"-orientationsPerScale")) { char *c; n_scale=0; for(c=strtok(*++args,",");c;c=strtok(NULL,",")) { if(!sscanf(c,"%d",&orientations_per_scale[n_scale++])) { fprintf(stderr,"could not parse %s argument",a); usage(); } } } else { infilename=a; } } color_image_t *im=load_ppm(infilename); //Here's the method call -> :( float *desc=color_gist_scaletab(im,nblocks,n_scale,orientations_per_scale); int i; int descsize=0; //compute descriptor size for(i=0;i<n_scale;i++) descsize+=nblocks*nblocks*orientations_per_scale[i]; descsize*=3; // color //print descriptor for(i=0;i<descsize;i++) printf("%.4f ",desc[i]); printf("\n"); free(desc); color_image_delete(im); return 0; } Any help would be greatly appreciated. I hope this is enough info. Let me know if I need to add more.

    Read the article

  • Java SWT - placing image buttons on the image background

    - by foma
    I am trying to put buttons with images(gif) on the background which has already been set as an image (shell.setBackgroundImage(image)) and I can't figure out how to remove transparent border around buttons with images. I would be grateful if somebody could give me some tip about this issue. Here is my code: import org.eclipse.swt.widgets.*; import org.eclipse.swt.layout.*; import org.eclipse.swt.*; import org.eclipse.swt.graphics.*; public class Main_page { public static void main(String[] args) { Display display = new Display(); Shell shell = new Shell(display); Image image = new Image(display, "bg.gif"); shell.setBackgroundImage(image); shell.setBackgroundMode(SWT.INHERIT_DEFAULT); shell.setFullScreen(true); Button button = new Button(shell, SWT.PUSH); button.setImage(new Image(display, "button.gif")); RowLayout Layout = new RowLayout(); shell.setLayout(Layout); shell.open(); while (!shell.isDisposed()) { if (!display.readAndDispatch()) display.sleep(); } display.dispose(); } } Sorceror, thanks for your answer I will definitely look into this article. Maybe I will find my way. So far I have improved my code a little bit. Firstly, I managed to get rid of the gray background noise. Secondly, I finally succeeded in creating the button as I had seen it in the first place. Yet, another obstacle has arisen. When I removed image(button) transparent border it turned out that the button change its mode(from push button to check box). The problem is that I came so close to the thing I was looking for and now I am a little puzzled. If you have some time please give a glance at my code. Here is the code, if you launch it you will see what the problem is(hope you didn't have problems downloading images): import org.eclipse.swt.widgets.*; import org.eclipse.swt.layout.*; import org.eclipse.swt.*; import org.eclipse.swt.graphics.*; public class Main_page { public static void main(String[] args) { Display display = new Display(); Shell shell = new Shell(display); Image image = new Image(display, "bg.gif"); // Launch on a screen 1280x1024 shell.setBackgroundImage(image); shell.setBackgroundMode(SWT.TRANSPARENT); shell.setFullScreen(true); GridLayout gridLayout = new GridLayout(); gridLayout.marginTop = 200; gridLayout.marginLeft = 20; shell.setLayout(gridLayout); // If you replace SWT.PUSH with SWT.COLOR_TITLE_INACTIVE_BACKGROUND // you will see what I am looking for, despite nasty check box Button button = new Button(shell, SWT.PUSH); button.setImage(new Image(display, "moneyfast.gif")); shell.open(); while (!shell.isDisposed()) { if (!display.readAndDispatch()) display.sleep(); } display.dispose(); }
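    A hedged sketch of one common workaround (the image file name is the one from the question, and these lines are meant to replace the Button block inside the existing main(), after the shell is created): a native push button always paints its own frame and background, so when only the image should be visible many SWT programs draw it on a Label (or a Canvas) and handle the click themselves.

        // uses org.eclipse.swt.widgets.Label; the Listener is a Java 8 lambda
        Label imageButton = new Label(shell, SWT.NONE);
        imageButton.setImage(new Image(display, "moneyfast.gif"));
        imageButton.addListener(SWT.MouseUp, event -> {
            System.out.println("image clicked");   // react to the press here
        });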

    Read the article

  • Java: will this threading setup work, or what could I be doing wrong?

    - by Erik
    Im a bit unsure and have to get advice. I have the: public class MyApp extends JFrame{ And from there i do; MyServer = new MyServer (this); MyServer.execute(); MyServer is a: public class MyServer extends SwingWorker<String, Object> { MyServer is doing listen_socket.accept() in the doInBackground() and on connection it create a new class Connection implements Runnable { I have the belove DbHelper that are a singleton. It holds an Sqlite connected. Im initiating it in the above MyApp and passing references all the way in to my runnable: class Connection implements Runnable { My question is what will happen if there are two simultaneous read or `write? My thought here was the all methods in the singleton are synchronized and would put all calls in the queue waiting to get a lock on the synchronized method. Will this work or what can i change? public final class DbHelper { private boolean initalized = false; private String HomePath = ""; private File DBFile; private static final String SYSTEM_TABLE = "systemtable"; Connection con = null; private Statement stmt; private static final ContentProviderHelper instance = new ContentProviderHelper (); public static ContentProviderHelper getInstance() { return instance; } private DbHelper () { if (!initalized) { initDB(); initalized = true; } } private void initDB() { DBFile = locateDBFile(); try { Class.forName("org.sqlite.JDBC"); // create a database connection con = DriverManager.getConnection("jdbc:sqlite:J:/workspace/workComputer/user_ptpp"); } catch (SQLException e) { e.printStackTrace(); } catch (ClassNotFoundException e) { e.printStackTrace(); } } private File locateDBFile() { File f = null; try{ HomePath = System.getProperty("user.dir"); System.out.println("HomePath: " + HomePath); f = new File(HomePath + "/user_ptpp"); if (f.canRead()) return f; else { boolean success = f.createNewFile(); if (success) { System.out.println("File did not exist and was created " + HomePath); // File did not exist and was created } else { System.out.println("File already exists " + HomePath); // File already exists } } } catch (IOException e) { System.out.println("Maybe try a new directory. " + HomePath); //Maybe try a new directory. } return f; } public String getHomePath() { return HomePath; } private synchronized String getDate(){ SimpleDateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss"); Date date = new Date(); return dateFormat.format(date); } public synchronized String getSelectedSystemTableColumn( String column) { String query = "select "+ column + " from " + SYSTEM_TABLE ; try { stmt = con.createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY); ResultSet rs = stmt.executeQuery(query); while (rs.next()) { String value = rs.getString(column); if(value == null || value == "") return ""; else return value; } } catch (SQLException e ) { e.printStackTrace(); return ""; } finally { } return ""; } }
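    One concrete problem stands out before the locking question: the static field and getInstance() return a ContentProviderHelper, not the DbHelper, so callers never actually share the single DbHelper whose synchronized methods are supposed to serialise access. A hedged sketch of just the singleton skeleton (JDBC URL and method name taken from the question, query code trimmed):

        import java.sql.Connection;
        import java.sql.DriverManager;

        public final class DbHelper {
            private static final DbHelper INSTANCE = new DbHelper();
            private final Connection con;

            private DbHelper() {
                try {
                    Class.forName("org.sqlite.JDBC");
                    con = DriverManager.getConnection("jdbc:sqlite:J:/workspace/workComputer/user_ptpp");
                } catch (Exception e) {
                    throw new IllegalStateException("could not open database", e);
                }
            }

            public static DbHelper getInstance() {
                return INSTANCE;
            }

            // every read and write goes through synchronized methods on this one object,
            // so simultaneous calls from the Connection runnables queue up for the lock
            public synchronized String getSelectedSystemTableColumn(String column) {
                // ... same JDBC query as in the question ...
                return "";
            }
        }

    With one shared instance and all database access funnelled through its synchronized methods, concurrent connections will simply wait their turn; SQLite itself also allows only one writer at a time, so this coarse lock is usually acceptable for this kind of workload.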

    Read the article

  • Asus ME302C crash when creating a script

    - by wxfred
    State beforehand: So far, only Asus me302c would crash when it creates a script. This device can create renderscript context successfully 06-03 10:12:50.509: V/RenderScript_jni(3144): RS compat mode 06-03 10:12:50.509: V/RenderScript(3144): 0x610bbfc0 Launching thread(s), CPUs 4 06-03 10:12:50.549: D/libEGL(3144): loaded /vendor/lib/egl/libEGL_POWERVR_SGX544_115.so 06-03 10:12:50.559: D/libEGL(3144): loaded /vendor/lib/egl/libGLESv1_CM_POWERVR_SGX544_115.so 06-03 10:12:50.559: D/libEGL(3144): loaded /vendor/lib/egl/libGLESv2_POWERVR_SGX544_115.so 06-03 10:12:50.619: D/OpenGLRenderer(3144): Enabling debug mode 0 06-03 10:12:52.869: D/dalvikvm(3144): Rejecting registerization due to ushr-int/lit8 v4, v7, (#19) When create a script, it crashed. 06-03 09:55:09.859: D/basefilter(26682): ===createScript=== 06-03 09:55:09.869: E/RenderScript(26682): Unable to open shared library (/data/data/xxx.xxxxxxxxxxx.xxxxxxxxx.xxxxx.xxxx//lib/librs.basefilter.so): Cannot load library: soinfo_relocate(linker.cpp:975): cannot locate symbol "_Z3dotDv3_fS_" referenced by "librs.basefilter.so"... 06-03 09:55:09.869: E/RenderScript(26682): Unable to open system shared library (/system/lib/librs.basefilter.so): (null) 06-03 09:55:09.869: D/AndroidRuntime(26682): Shutting down VM 06-03 09:55:09.869: W/dalvikvm(26682): threadid=1: thread exiting with uncaught exception (group=0x418b9e10) 06-03 09:55:09.869: E/AndroidRuntime(26682): FATAL EXCEPTION: main 06-03 09:55:09.869: E/AndroidRuntime(26682): android.support.v8.renderscript.RSRuntimeException: Loading of ScriptC script failed. 06-03 09:55:09.869: E/AndroidRuntime(26682): at android.support.v8.renderscript.ScriptC.<init>(ScriptC.java:69) 06-03 09:55:09.869: E/AndroidRuntime(26682): at xxx.xxxxxxxxxxx.xxxxxxxxx.xxxxxxxxxx.xxxxxx.ScriptC_BaseFilter.<init>(ScriptC_BaseFilter.java:41) 06-03 09:55:09.869: E/AndroidRuntime(26682): at xxx.xxxxxxxxxxx.xxxxxxxxx.xxxxxxxxxx.xxxxxx.ScriptC_BaseFilter.<init>(ScriptC_BaseFilter.java:35) 06-03 09:55:09.869: E/AndroidRuntime(26682): at xxx.xxxxxxxxxxx.xxxxxxxxx.xxxxxxxxxx.xxxxxx.xxxxxx.TeethWhiteningRSFilter.onCreateScript(TeethWhiteningRSFilter.java:19) 06-03 09:55:09.869: E/AndroidRuntime(26682): at xxx.xxxxxxxxxxx.xxxxxxxxx.xxxxxxxxxx.xxxxxx.xxxxxx.TeethWhiteningRSFilter.onCreateScript(TeethWhiteningRSFilter.java:1) 06-03 09:55:09.869: E/AndroidRuntime(26682): at xxx.xxxxxxxxxxx.xxxxxxxxx.xxxxxxxxxx.BaseRSFilter.createScript(BaseRSFilter.java:39) 06-03 09:55:09.869: E/AndroidRuntime(26682): at xxx.xxxxxxxxxxx.xxxxxxxxx.xxxxxxxxxx.RSFilterEngine.addFilter(RSFilterEngine.java:76) 06-03 09:55:09.869: E/AndroidRuntime(26682): at xxx.xxxxxxxxxxx.xxxxxxxxx.xxxxx.xxxx.BeautyActivity.changeBeautyEffect(BeautyActivity.java:277) 06-03 09:55:09.869: E/AndroidRuntime(26682): at xxx.xxxxxxxxxxx.xxxxxxxxx.xxxxx.xxxx.BeautyActivity$2.onClick(BeautyActivity.java:100) 06-03 09:55:09.869: E/AndroidRuntime(26682): at com.android.internal.app.AlertController$ButtonHandler.handleMessage(AlertController.java:166) 06-03 09:55:09.869: E/AndroidRuntime(26682): at android.os.Handler.dispatchMessage(Handler.java:99) 06-03 09:55:09.869: E/AndroidRuntime(26682): at android.os.Looper.loop(Looper.java:152) 06-03 09:55:09.869: E/AndroidRuntime(26682): at android.app.ActivityThread.main(ActivityThread.java:5132) 06-03 09:55:09.869: E/AndroidRuntime(26682): at java.lang.reflect.Method.invokeNative(Native Method) 06-03 09:55:09.869: E/AndroidRuntime(26682): at java.lang.reflect.Method.invoke(Method.java:511) 06-03 09:55:09.869: E/AndroidRuntime(26682): 
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:793) 06-03 09:55:09.869: E/AndroidRuntime(26682): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:560) 06-03 09:55:09.869: E/AndroidRuntime(26682): at dalvik.system.NativeStart.main(Native Method) Now, I will post some related infomation. project.properties ... renderscript.target=9 renderscript.support.mode=true sdk.buildtools=19.0.3 device info Brand: Asus Model: ME302C OS: 4.2.2 CPU: Intel(R) Atom(TM) CPU Z2560 @1.60GHz GPU Renderer PowerVR SGX 544MP Finally, by the way, the same code runs well on Galaxy s2, s4, note 2.

    Read the article

  • ANT: ways to include libraries and license issues

    - by Eric Tobias
    I have been trying to use Ant to compile a project and get it ready for distribution. I ran into several problems along the way that I was eventually able to solve, but the solution leaves me unsatisfied. First, let me explain the set-up of the project and its dependencies. I have a project, let's call it Primary, which depends on a couple of libraries such as the fantastic Guava. It also depends on another project of mine, let's call it Secondary. The Secondary project has dependencies of its own, for example JDOM2, and I have referenced the Jar I build with Ant in Primary. Here are the interesting bits of the build.xml so you can get a picture of what I am doing: <project name="Primary" default="all" basedir="."> <property name='build' location='dist' /> <property name='application.version' value='1.0'/> <property name='application.name' value='Primary'/> <property name='distribution' value='${application.name}-${application.version}'/> <path id='compile.classpath'> <fileset dir='libs'> <include name='*.jar'/> </fileset> </path> <target name='compile' description='Compile source files.'> <javac includeantruntime="false" srcdir="src" destdir="bin"> <classpath refid='compile.classpath'/> </javac> </target> <target name='jar' description='Create a jar file for distribution.' depends="compile"> <jar destfile='${build}/${distribution}.jar'> <fileset dir="bin"/> <zipgroupfileset dir="libs" includes="*.jar"/> </jar> </target> The Secondary project's build.xml is nearly identical except that it has a manifest, since it needs to run: <target name='jar' description='Create a jar file for distribution.' depends="compile"> <jar destfile='${dist}/${distribution}.jar' basedir="${build}" > <fileset dir="${build}"/> <zipgroupfileset dir="libs" includes="*.jar"/> <manifest> <attribute name="Main-Class" value="lu.tudor.ssi.kiss.climate.ClimateChange"/> </manifest> </jar> </target> I spent many hours trying to include the dependencies as Jars rather than as unpacked class files; now that I finally have it working this way, I don't have the time or insight to go back and figure out what I did wrong. Furthermore, I believe that unpacking these libraries into my Jar as class files is bad practice, as it could give rise to licensing issues, whereas not repackaging them and merely shipping them in a directory alongside the built Jar most probably would not (and if it did, you could choose not to distribute them yourself). I think the reason I could not assemble the classpath correctly (I always received NoClassDefFoundError for classes or libraries of the Primary project when launching Secondary's Jar) is that I am not very experienced with Ant. Do I need to specify a classpath for both projects? Shouldn't specifying the classpath as . have allowed me to simply drop all dependencies into the same folder as Secondary's Jar?
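    A hedged sketch of the usual way to keep the libraries outside the Jar (the library file names are placeholders; use whatever actually sits in your libs folder): list them on the manifest Class-Path, which the JVM resolves relative to the Jar itself, and ship the Jar together with a libs directory. With this in place the zipgroupfileset line, and its licensing worries, can be dropped.

        <target name='jar' description='Create a jar file for distribution.' depends="compile">
            <jar destfile='${dist}/${distribution}.jar' basedir="${build}">
                <manifest>
                    <attribute name="Main-Class" value="lu.tudor.ssi.kiss.climate.ClimateChange"/>
                    <!-- entries are resolved relative to the Jar's own location at runtime -->
                    <attribute name="Class-Path" value="libs/guava.jar libs/jdom2.jar libs/Primary-1.0.jar"/>
                </manifest>
            </jar>
        </target>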

    Read the article

  • Sending XML to a servlet from ActionScript

    - by John Doe
    I am only getting empty arrays on output. Anyone know what Exactly I'm doing wrong? package myDungeonAccessor; /* * To change this template, choose Tools | Templates * and open the template in the editor. */ import java.io.IOException; import java.io.ObjectInputStream; import java.io.ObjectOutputStream; import java.io.PrintWriter; import javax.servlet.ServletException; import javax.servlet.http.HttpServlet; import javax.servlet.http.HttpServletRequest; import javax.servlet.http.HttpServletResponse; public class myDungeonAccessorServlet extends HttpServlet { private myDungeonAccessor dataAccessor; /** * Processes requests for both HTTP <code>GET</code> and <code>POST</code> methods. * @param request servlet request * @param response servlet response * @throws ServletException if a servlet-specific error occurs * @throws IOException if an I/O error occurs */ protected void processRequest(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { response.setContentType("text/html;charset=UTF-8"); PrintWriter out = response.getWriter(); try { /* TODO output your page here out.println("<html>"); out.println("<head>"); out.println("<title>Servlet myDungeonAccessorServlet</title>"); out.println("</head>"); out.println("<body>"); out.println("<h1>Servlet myDungeonAccessorServlet at " + request.getContextPath () + "</h1>"); out.println("</body>"); out.println("</html>"); */ } finally { out.close(); } } // <editor-fold defaultstate="collapsed" desc="HttpServlet methods. Click on the + sign on the left to edit the code."> /** * Handles the HTTP <code>GET</code> method. * @param request servlet request * @param response servlet response * @throws ServletException if a servlet-specific error occurs * @throws IOException if an I/O error occurs */ @Override protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { processRequest(request, response); // PrintWriter out = response.getWriter(); System.out.println("yo mom"); } /** * Handles the HTTP <code>POST</code> method. * @param request servlet request * @param response servlet response * @throws ServletException if a servlet-specific error occurs * @throws IOException if an I/O error occurs */ @Override protected void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { //System.out.println("heppo"); //dataAccessor = new myDungeonAccessor(); System.out.println("Hello"); try { System.out.println("HEADERS: " + request.getHeaderNames()); ObjectInputStream in = new ObjectInputStream(request.getInputStream()); ObjectOutputStream out = new ObjectOutputStream(response.getOutputStream()); } catch(Exception e) { e.printStackTrace(); } System.out.println("WAZZUP"); byte [] buffer = new byte[4096]; //in.read(buffer); System.out.println("TEST!"); String s = new String(buffer); System.out.println("Update S:" + s); } /** * Returns a short description of the servlet. * @return a String containing servlet description */ @Override public String getServletInfo() { return "Short description"; } }
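    A hedged sketch of the read side only, leaving the parsing out: an ObjectInputStream expects a Java serialisation stream, but an ActionScript client posts plain XML bytes, so wrapping request.getInputStream() that way will never yield the data (and the commented-out in.read(buffer) is why the printed string stays empty). Reading the body as text first is the usual approach; java.io.BufferedReader has to be imported for this.

        @Override
        protected void doPost(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            StringBuilder xml = new StringBuilder();
            try (BufferedReader reader = request.getReader()) {
                String line;
                while ((line = reader.readLine()) != null) {
                    xml.append(line).append('\n');
                }
            }
            System.out.println("received XML: " + xml);
            // hand xml.toString() to an XML parser (DocumentBuilderFactory, JDOM, ...) here
        }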

    Read the article

  • C++ copy-construct / construct-and-assign question

    - by Andy
    Blockquote Here is an extract from item 56 of the book "C++ Gotchas": It's not uncommon to see a simple initialization of a Y object written any of three different ways, as if they were equivalent. Y a( 1066 ); Y b = Y(1066); Y c = 1066; In point of fact, all three of these initializations will probably result in the same object code being generated, but they're not equivalent. The initialization of a is known as a direct initialization, and it does precisely what one might expect. The initialization is accomplished through a direct invocation of Y::Y(int). The initializations of b and c are more complex. In fact, they're too complex. These are both copy initializations. In the case of the initialization of b, we're requesting the creation of an anonymous temporary of type Y, initialized with the value 1066. We then use this anonymous temporary as a parameter to the copy constructor for class Y to initialize b. Finally, we call the destructor for the anonymous temporary. To test this, I did a simple class with a data member (program attached at the end) and the results were surprising. It seems that for the case of b, the object was constructed by the copy constructor rather than as suggested in the book. Does anybody know if the language standard has changed or is this simply an optimisation feature of the compiler? I was using Visual Studio 2008. Code sample: #include <iostream> class Widget { std::string name; public: // Constructor Widget(std::string n) { name=n; std::cout << "Constructing Widget " << this->name << std::endl; } // Copy constructor Widget (const Widget& rhs) { std::cout << "Copy constructing Widget from " << rhs.name << std::endl; } // Assignment operator Widget& operator=(const Widget& rhs) { std::cout << "Assigning Widget from " << rhs.name << " to " << this->name << std::endl; return *this; } }; int main(void) { // construct Widget a("a"); // copy construct Widget b(a); // construct and assign Widget c("c"); c = a; // copy construct! Widget d = a; // construct! Widget e = "e"; // construct and assign Widget f = Widget("f"); return 0; } Output: Constructing Widget a Copy constructing Widget from a Constructing Widget c Assigning Widget from a to c Copy constructing Widget from a Constructing Widget e Constructing Widget f Copy constructing Widget from f I was most surprised by the results of constructing d and e.

    Read the article

  • Canceling an edit in SQLite

    - by Yusuf
    I am trying to use handle database with insert, update, and delete such as notepad. I'm having problems in canceling data .In normal case which presses the confirm button, it will be saved into sqlite and will be displayed on listview. How can I make cancel event through back key or more button event? I want my Button and back key to cancel data but its keep on saving... public static int numTitle = 1; public static String curDate = ""; private EditText mTitleText; private EditText mBodyText; private Long mRowId; private NotesDbAdapter mDbHelper; private TextView mDateText; private boolean isOnBackeyPressed; public SQLiteDatabase db; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); mDbHelper = new NotesDbAdapter(this); mDbHelper.open(); setContentView(R.layout.note_edit); setTitle(R.string.edit_note); mTitleText = (EditText) findViewById(R.id.etTitle_NE); mBodyText = (EditText) findViewById(R.id.etBody_NE); mDateText = (TextView) findViewById(R.id.tvDate_NE); long msTime = System.currentTimeMillis(); Date curDateTime = new Date(msTime); SimpleDateFormat formatter = new SimpleDateFormat("d'/'M'/'y"); curDate = formatter.format(curDateTime); mDateText.setText("" + curDate); Button confirmButton = (Button) findViewById(R.id.bSave_NE); Button cancelButton = (Button) findViewById(R.id.bCancel_NE); Button deleteButton = (Button) findViewById(R.id.bDelete_NE); mRowId = (savedInstanceState == null) ? null : (Long) savedInstanceState .getSerializable(NotesDbAdapter.KEY_ROWID); if (mRowId == null) { Bundle extras = getIntent().getExtras(); mRowId = extras != null ? extras.getLong(NotesDbAdapter.KEY_ROWID) : null; } populateFields(); confirmButton.setOnClickListener(new View.OnClickListener() { public void onClick(View view) { setResult(RESULT_OK); Toast.makeText(NoteEdit.this, "Saved", Toast.LENGTH_SHORT) .show(); finish(); } }); deleteButton.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { // TODO Auto-generated method stub mDbHelper.deleteNote(mRowId); Toast.makeText(NoteEdit.this, "Deleted", Toast.LENGTH_SHORT) .show(); finish(); } }); cancelButton.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { // TODO Auto-generated method stub boolean diditwork = true; try { db.beginTransaction(); populateFields(); db.setTransactionSuccessful(); } catch (SQLException e) { diditwork = false; } finally { db.endTransaction(); if (diditwork) { Toast.makeText(NoteEdit.this, "Canceled", Toast.LENGTH_SHORT).show(); } } } }); } private void populateFields() { if (mRowId != null) { Cursor note = mDbHelper.fetchNote(mRowId); startManagingCursor(note); mTitleText.setText(note.getString(note .getColumnIndexOrThrow(NotesDbAdapter.KEY_TITLE))); mBodyText.setText(note.getString(note .getColumnIndexOrThrow(NotesDbAdapter.KEY_BODY))); } } @Override protected void onSaveInstanceState(Bundle outState) { super.onSaveInstanceState(outState); saveState(); outState.putSerializable(NotesDbAdapter.KEY_ROWID, mRowId); } public void onBackPressed() { super.onBackPressed(); isOnBackeyPressed = true; finish(); } @Override protected void onPause() { super.onPause(); if (!isOnBackeyPressed) saveState(); } @Override protected void onResume() { super.onResume(); populateFields(); } private void saveState() { String title = mTitleText.getText().toString(); String body = mBodyText.getText().toString(); if (mRowId == null) { long id = mDbHelper.createNote(title, body, curDate); if (id > 0) { mRowId = id; } } else { 
mDbHelper.updateNote(mRowId, title, body, curDate); } }
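    A hedged sketch of one way to make both Cancel and the back key actually discard (field and method names follow the question; these pieces go into the existing NoteEdit class and its onCreate()): the note keeps being saved because onPause() calls saveState() whenever the activity is left by any route other than the back key, so a single discard flag that both paths set, and that onPause() checks, is usually enough. There is no need to open a transaction on the raw db field (which is never initialised here) just to cancel.

        private boolean discardChanges = false;

        // inside onCreate()
        cancelButton.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                discardChanges = true;                     // leave without touching the database
                Toast.makeText(NoteEdit.this, "Canceled", Toast.LENGTH_SHORT).show();
                finish();
            }
        });

        @Override
        public void onBackPressed() {
            discardChanges = true;
            super.onBackPressed();
        }

        @Override
        protected void onPause() {
            super.onPause();
            if (!discardChanges) {
                saveState();
            }
        }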

    Read the article

  • Hosting simple Python scripts in a container to handle concurrency, configuration, caching, etc.

    - by Justin Grant
    My first real-world Python project is to write a simple framework (or re-use/adapt an existing one) which can wrap small python scripts (which are used to gather custom data for a monitoring tool) with a "container" to handle boilerplate tasks like: fetching a script's configuration from a file (and keeping that info up to date if the file changes and handle decryption of sensitive config data) running multiple instances of the same script in different threads instead of spinning up a new process for each one expose an API for caching expensive data and storing persistent state from one script invocation to the next Today, script authors must handle the issues above, which usually means that most script authors don't handle them correctly, causing bugs and performance problems. In addition to avoiding bugs, we want a solution which lowers the bar to create and maintain scripts, especially given that many script authors may not be trained programmers. Below are examples of the API I've been thinking of, and which I'm looking to get your feedback about. A scripter would need to build a single method which takes (as input) the configuration that the script needs to do its job, and either returns a python object or calls a method to stream back data in chunks. Optionally, a scripter could supply methods to handle startup and/or shutdown tasks. HTTP-fetching script example (in pseudocode, omitting the actual data-fetching details to focus on the container's API): def run (config, context, cache) : results = http_library_call (config.url, config.http_method, config.username, config.password, ...) return { html : results.html, status_code : results.status, headers : results.response_headers } def init(config, context, cache) : config.max_threads = 20 # up to 20 URLs at one time (per process) config.max_processes = 3 # launch up to 3 concurrent processes config.keepalive = 1200 # keep process alive for 10 mins without another call config.process_recycle.requests = 1000 # restart the process every 1000 requests (to avoid leaks) config.kill_timeout = 600 # kill the process if any call lasts longer than 10 minutes Database-data fetching script example might look like this (in pseudocode): def run (config, context, cache) : expensive = context.cache["something_expensive"] for record in db_library_call (expensive, context.checkpoint, config.connection_string) : context.log (record, "logDate") # log all properties, optionally specify name of timestamp property last_date = record["logDate"] context.checkpoint = last_date # persistent checkpoint, used next time through def init(config, context, cache) : cache["something_expensive"] = get_expensive_thing() def shutdown(config, context, cache) : expensive = cache["something_expensive"] expensive.release_me() Is this API appropriately "pythonic", or are there things I should do to make this more natural to the Python scripter? (I'm more familiar with building C++/C#/Java APIs so I suspect I'm missing useful Python idioms.) Specific questions: is it natural to pass a "config" object into a method and ask the callee to set various configuration options? Or is there another preferred way to do this? when a callee needs to stream data back to its caller, is a method like context.log() (see above) appropriate, or should I be using yield instead? (yeild seems natural, but I worry it'd be over the head of most scripters) My approach requires scripts to define functions with predefined names (e.g. "run", "init", "shutdown"). Is this a good way to do it? 
If not, what other mechanism would be more natural? I'm passing the same config, context, cache parameters into every method. Would it be better to use a single "context" parameter instead? Would it be better to use global variables instead? Finally, are there existing libraries you'd recommend to make this kind of simple "script-running container" easier to write?
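    On the context.log()-versus-yield question, a hedged sketch of the generator route (db_library_call and the config/context/cache objects are the pseudocode names from above, not real APIs): if run() is a generator, the container iterates it and decides for itself whether to log, stream or batch each record, and the checkpoint still advances at the natural point. Scripters only have to learn the single keyword yield.

        # container side (sketch): for record in script_module.run(config, context, cache): ...
        def run(config, context, cache):
            expensive = cache["something_expensive"]
            for record in db_library_call(expensive, context.checkpoint, config.connection_string):
                yield record                             # the container logs/streams it
                context.checkpoint = record["logDate"]   # persisted for the next invocation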

    Read the article

  • Why is my program getting slower and slower?

    - by RedWolf
    I'm using the program to send data from database to the Excel file . It works fine at the beginning and then becomes more and more slowly,finally it run out of the memory and the following error ocurrs: "java.lang.OutOfMemoryError: Java heap space...". The problem can be resolved by adding the jvm heap sapce.But the question is that it spends too much time to run out the program. After several minutes,it finished a loop with 4 seconds which can be finished with 0.5 seconds at the beginning . I can't found a solution to make it always run in a certain speed. Is it my code problem? Any clues on this? Here is the code: public void addAnswerRow(List<FinalUsers> finalUsersList,WritableWorkbook book){ if (finalUsersList.size() >0 ) { try { WritableSheet sheet = book.createSheet("Answer", 0); int colCount = 0; sheet.addCell(new Label(colCount++,0,"Number")); sheet.addCell(new Label(colCount++,0,"SchoolNumber")); sheet.addCell(new Label(colCount++,0,"District")); sheet.addCell(new Label(colCount++,0,"SchoolName")); sheet.setColumnView(1, 15); sheet.setColumnView(3, 25); List<Elements> elementsList = this.elementsManager.getObjectElementsByEduTypeAndQuestionnaireType(finalUsersList.get(0).getEducationType().getId(), this.getQuestionnaireByFinalUsersType(finalUsersList.get(0).getFinalUsersType().getId())); Collections.sort(elementsList, new Comparator<Elements>(){ public int compare(Elements o1, Elements o2) { for(int i=0; i< ( o1.getItemNO().length()>o2.getItemNO().length()? o2.getItemNO().length(): o1.getItemNO().length());i++){ if (CommonFun.isNumberic(o1.getItemNO().substring(0, o1.getItemNO().length()>3? 4: o1.getItemNO().length()-1)) && !CommonFun.isNumberic(o2.getItemNO().substring(0, o2.getItemNO().length()>3? 4: o2.getItemNO().length()-1))){ return 1; } if (!CommonFun.isNumberic(o1.getItemNO().substring(0, o1.getItemNO().length()>3? 4: o1.getItemNO().length()-1)) && CommonFun.isNumberic(o2.getItemNO().substring(0,o2.getItemNO().length()>3? 4:o2.getItemNO().length()-1))){ return -1; } if ( o1.getItemNO().charAt(i)!=o2.getItemNO().charAt(i) ){ return o1.getItemNO().charAt(i)-o2.getItemNO().charAt(i); } } return o1.getItemNO().length()> o2.getItemNO().length()? 
1:-1; }}); for (Elements elements : elementsList){ sheet.addCell(new Label(colCount++,0,this.getTitlePre(finalUsersList.get(0).getFinalUsersType().getId(), finalUsersList.get(0).getEducationType().getId())+elements.getItemNO()+elements.getItem().getStem())); } int sheetRowCount =1; int sheetColCount =0; for(FinalUsers finalUsers : finalUsersList){ sheetColCount =0; sheet.addCell(new Label(sheetColCount++,sheetRowCount,String.valueOf(sheetRowCount))); sheet.addCell(new Label(sheetColCount++,sheetRowCount,finalUsers.getSchool().getSchoolNumber())); sheet.addCell(new Label(sheetColCount++,sheetRowCount,finalUsers.getSchool().getDistrict().getDistrictNumber().toString().trim())); sheet.addCell(new Label(sheetColCount++,sheetRowCount,finalUsers.getSchool().getName())); List<AnswerLog> answerLogList = this.answerLogManager.getAnswerLogByFinalUsers(finalUsers.getId()); Map<String,String> answerMap = new HashMap<String,String>(); for(AnswerLog answerLog :answerLogList ){ if (answerLog.getOptionsId() != null) { answerMap.put(answerLog.getElement().getItemNO(), this.getOptionsAnswer(answerLog.getOptionsId())); }else if (answerLog.getBlanks()!= null){ answerMap.put(answerLog.getElement().getItemNO(), answerLog.getBlanks()); }else{ answerMap.put(answerLog.getElement().getItemNO(), answerLog.getSubjectiveItemContent()); } } for (Elements elements : elementsList){ sheet.addCell(new Label(sheetColCount++,sheetRowCount,null==answerMap.get(elements.getItemNO())?"0":answerMap.get(elements.getItemNO()))); } sheetRowCount++; } book.write(); book.close(); } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); } catch (RowsExceededException e) { // TODO Auto-generated catch block e.printStackTrace(); } catch (WriteException e) { // TODO Auto-generated catch block e.printStackTrace(); } } }
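    A hedged sketch of one jxl-side lever (assumption: the jxl/JExcelAPI build in use has WorkbookSettings.setUseTemporaryFileDuringWrite, which the later 2.6.x releases do): by default jxl keeps every cell of the workbook in memory until write(), which is a common reason exports like this get slower as rows accumulate and finally hit OutOfMemoryError. Letting jxl spill to a temporary file while writing keeps the footprint flatter; the per-user getAnswerLogByFinalUsers query inside the loop is the other obvious place to profile.

        WorkbookSettings settings = new WorkbookSettings();
        settings.setUseTemporaryFileDuringWrite(true);
        settings.setTemporaryFileDuringWriteDirectory(new File(System.getProperty("java.io.tmpdir")));
        WritableWorkbook book = Workbook.createWorkbook(new File("answers.xls"), settings);
        // then pass this book into addAnswerRow(...) as before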

    Read the article

  • A friendly elaboration on how Iceweasel misses the mark, as recently requested in a post

    - by v3x3n
    I would like to specify what you asked about in how iceweasel misses on in hopes that you are interested in considering the answer to be valid considering the package should run efficient to a novice user especially when getting a first impression on a new software, browser, or package they are given by default. First let me say that I am very grateful for all debian has and continues to do for its user base and I am not complaining, only hoping that in mentioning such it will eventually aid in improving the disappointing experience iceweasel gave me in multiple ways right from the start of using it. I am sure tremendous amounts of hard work went into making it work as well as it does, and tremendous effort continues to go into IW like the rest of the package and distros such as this... Having said that, I do aim to keep my examples simple as possible but also explained in enough detail as to not leave out important info to get a full idea of my issue... (Thanks for understanding if I sound novice to debian, I am but continue to learn and advance by trial and error each day, and of course the wonderful help from all the great minds available here and there)... Well, my bone to pick (unfortunately as I disdain being a complainer when so much has been done for us, the end user) with IW is mainly 3 immediately noticed problems in offering me a smooth test drive (completely normal browsing conduct of your average user, I should indicate) as soon as I started out a task using IW (heading directly to google images and searched for a few web sized images, then saved a few (no more than 800 to 900 in dimensions, typically a normal procedure I would imagine) I experienced a horrendous reoccurring freeze and lag time that almost left me to kill the process more than once on each attempt and continued to compound so that by a few attempts later, even 1 single image save locked up the entire browser, ultimately giving up on my procedure in its simplicity with a bit of aggravation, if there was something I had done wrong on my end pre-attempt or during installation. Secondly, visiting video player sites in this case youtube will not allow for any video feed to play whatsoever and again, experienced a slight freezing that eventually dissipated as long as I didn't attempt to press play or reload another video trial... Very annoying and possibly related to my first issue (which could rely on my own lack of knowledge in setup config, but that troubles me if so).... Lastly as another user has stated already, in which brought you to ask them what exactly seemed to be the problem in the first place is also my final straw that broke the weasel's back, leading me here to find a solution to a updated version of firefox... being that iceweasel (an apparent build of firefox 17.x.x) leaving the user to find themselves out of luck when installing their favorite or much useful add-ons or plugins that aid their task or normal usage online.... As the other user clearly stated... there is almost zero options and barely any luck to find compatible plugins with such an outdated version still being used... I can only imagine if as a novoice or standard user to linux... and beginner to debian (which I am all for btw!) if I am discovering major conductivity issues which hinder normal user conduct right off the bat... that there is probably a bundle or more of security vulnerabilities that would unravel to me if I continued to give mr Hommey's package any further attention, or interest in continuing to use... 
This is a deal breaker on any account, unless all 3 of these issues stated are due to a very n00berish setting or lack thereof (which if that is the case, such was not properly stated for beginner in any place available by following installation tutorials such as LVM installers, which I felt lacked a clear understanding to someone who is trying to learn why and how, rather than, a fix to a potentially broken out of box installation... I really think that is irrelevant in any case however, as there is much step by step support regarding that kind of thing, just saying... its very challenging, along side a job to keep deadlines on, and the need to learn a better and more costomizible system, that was not as popularized when I became a dev via windows (OK OK I know thats a huge mistake and now I am way off original topic but anyway) Thanks very much for taking the time to read over this, and I hope it honestly gives you a few good and valid reasons for users wanting and seeking a way to get around iceweasel altogether... Finally, if I am missing something vital that renders all I've stated invalid or useless, please feel free to shove that elephant in the room my direction! Thanks for all you do, as I am sure it is very useful and dedicated, being you are a debian maintainer, and super user! Much love for intelligence, freedom and sharing the secrets of the web! May it remain unregulated and a good place to grow and learn for generations to come! Stay true! Take care now! -v3x (v3x @ gmx .com)

    Read the article

  • How to create a text file in a Windows service

    - by angel ansari
    Hi, I have an XML file <config> <ServiceName>autorunquery</ServiceName> <DBConnection> <server>servername</server> <user>xyz</user> <password>klM#2bs</password> <initialcatelog>TEST</initialcatelog> </DBConnection> <Log> <logfilename>d:\testlogfile.txt</logfilename> </Log> <Frequency> <value>10</value> <unit>minute</unit> </Frequency> <CheckQuery>select * from credit_debit1 where station='Corporate'</CheckQuery> <Queries total="3"> <Query id="1">Update credit_debit1 set station='xxx' where id=2</Query> <Query id="2">Update credit_debit1 set station='xxx' where id=4</Query> <Query id="3">Update credit_debit1 set station='xxx' where id=9</Query> </Queries> </config> using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Diagnostics; using System.Linq; using System.ServiceProcess; using System.Text; using System.IO; using System.Xml; namespace Service1 { public partial class Service1 : ServiceBase { XmlTextReader reader = null; string path = null; FileStream fs = null; StreamWriter sw = null; public Service1() { InitializeComponent(); } protected override void OnStart(string[] args) { timer1.Enabled = true; timer1.Interval = 10000; timer1.Start(); logfile("start service"); } protected override void OnStop() { timer1.Enabled = false; timer1.Stop(); logfile("stop service"); } private void logfile(string content) { try { reader = new XmlTextReader("queryconfig.xml");//xml file name which is in current directory if (reader.ReadToFollowing("logfilename")) { path = reader.ReadElementContentAsString(); } fs = new FileStream(path, FileMode.Append, FileAccess.Write); sw = new StreamWriter(fs); sw.Write(content); sw.WriteLine(DateTime.Now.ToString()); } catch (Exception ex) { sw.Write(ex.ToString()); throw; } finally { if (reader != null) reader.Close(); if (sw != null) sw.Close(); if (fs != null) fs.Close(); } } } } My problem is that the file is not created.
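    A hedged sketch of the two usual culprits (the XmlDocument/XPath lookup below is a simplification, not a drop-in replacement): a Windows service does not start in the folder its exe lives in (the working directory is typically %WinDir%\System32), so the relative "queryconfig.xml" is often not found, and the catch block then fails again because sw is still null when it tries to log the exception. Resolving paths against the service binary and letting StreamWriter create the file avoids both.

        private void logfile(string content)
        {
            string baseDir = AppDomain.CurrentDomain.BaseDirectory;      // folder of the service exe
            string configPath = Path.Combine(baseDir, "queryconfig.xml");

            var doc = new XmlDocument();
            doc.Load(configPath);
            string logPath = doc.SelectSingleNode("//logfilename").InnerText;

            using (var sw = new StreamWriter(logPath, true))             // append:true creates the file if missing
            {
                sw.Write(content);
                sw.WriteLine(DateTime.Now.ToString());
            }
        }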

    Read the article

  • Using Dynamic Proxies to centralize JPA code

    - by Daziplqa
    Hi All, Actually, This is not a question but really I need your opinions in a matter... I put his post here because I know you always active, so please don't consider this a bad question and share me your opinions. I've used Java dynamic proxies to Centralize The code of JPA that I used in a standalone mode, and Here's the dynamic proxy code: package com.forat.service; import java.lang.reflect.InvocationHandler; import java.lang.reflect.Method; import java.lang.reflect.Proxy; import java.util.logging.Level; import java.util.logging.Logger; import javax.persistence.EntityManager; import javax.persistence.EntityManagerFactory; import javax.persistence.EntityTransaction; import javax.persistence.Persistence; import com.forat.service.exceptions.DAOException; /** * Example of usage : * <pre> * OnlineFromService onfromService = * (OnlineFromService) DAOProxy.newInstance(new OnlineFormServiceImpl()); * try { * Student s = new Student(); * s.setName("Mohammed"); * s.setNationalNumber("123456"); * onfromService.addStudent(s); * }catch (Exception ex) { * System.out.println(ex.getMessage()); * } *</pre> * @author mohammed hewedy * */ public class DAOProxy implements InvocationHandler{ private Object object; private Logger logger = Logger.getLogger(this.getClass().getSimpleName()); private DAOProxy(Object object) { this.object = object; } public static Object newInstance(Object object) { return Proxy.newProxyInstance(object.getClass().getClassLoader(), object.getClass().getInterfaces(), new DAOProxy(object)); } @Override public Object invoke(Object proxy, Method method, Object[] args) throws Throwable { EntityManagerFactory emf = null; EntityManager em = null; EntityTransaction et = null; Object result = null; try { emf = Persistence.createEntityManagerFactory(Constants.UNIT_NAME); em = emf.createEntityManager();; Method entityManagerSetter = object.getClass(). getDeclaredMethod(Constants.ENTITY_MANAGER_SETTER_METHOD, EntityManager.class); entityManagerSetter.invoke(object, em); et = em.getTransaction(); et.begin(); result = method.invoke(object, args); et.commit(); return result; }catch (Exception ex) { et.rollback(); Throwable cause = ex.getCause(); logger.log(Level.SEVERE, cause.getMessage()); if (cause instanceof DAOException) throw new DAOException(cause.getMessage(), cause); else throw new RuntimeException(cause.getMessage(), cause); }finally { em.close(); emf.close(); } } } And here's the link that contains more info (http://m-hewedy.blogspot.com/2010/04/using-dynamic-proxies-to-centralize-jpa.html) (plz don't consider this as adds, as I can copy and past the entire topic here if you want that) So, Please give me your opinions. Thanks.
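    One concrete point worth weighing when reviewing the proxy above: if createEntityManagerFactory or createEntityManager throws, et and em are still null, so the catch block's et.rollback() and the finally block's em.close() themselves throw NullPointerException and mask the original error. Below is a minimal sketch of a more defensive invoke method; it is not the author's code, just one possible hardening that keeps the same structure and reuses the question's names (Constants.UNIT_NAME, Constants.ENTITY_MANAGER_SETTER_METHOD, DAOException) — everything else is an assumption:

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        EntityManagerFactory emf = null;
        EntityManager em = null;
        EntityTransaction et = null;
        try {
            emf = Persistence.createEntityManagerFactory(Constants.UNIT_NAME);
            em = emf.createEntityManager();
            // Hand the EntityManager to the wrapped service before running the real call.
            Method setter = object.getClass().getDeclaredMethod(
                    Constants.ENTITY_MANAGER_SETTER_METHOD, EntityManager.class);
            setter.invoke(object, em);

            et = em.getTransaction();
            et.begin();
            Object result = method.invoke(object, args);
            et.commit();
            return result;
        } catch (Exception ex) {
            // Only roll back if a transaction was actually started.
            if (et != null && et.isActive()) {
                et.rollback();
            }
            Throwable cause = (ex.getCause() != null) ? ex.getCause() : ex;
            if (cause instanceof DAOException) {
                throw new DAOException(cause.getMessage(), cause);
            }
            throw new RuntimeException(cause.getMessage(), cause);
        } finally {
            // Guard the cleanup: em/emf may never have been created.
            if (em != null && em.isOpen()) {
                em.close();
            }
            if (emf != null && emf.isOpen()) {
                emf.close();
            }
        }
    }

    A further design consideration is that building the EntityManagerFactory inside every invocation is expensive; creating it once and reusing it across calls would be a natural follow-up change.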

    Read the article

  • Can I create an xml that specifies element from 2 nested xsd's without using a prefixes?

    - by TweeZz
    I have 2 xsd's which are nested: DefaultSchema.xsd: <?xml version="1.0" encoding="utf-8"?> <xs:schema id="DefaultSchema" targetNamespace="http://myNamespace.com/DefaultSchema.xsd" elementFormDefault="qualified" xmlns="http://myNamespace.com/DefaultSchema.xsd" xmlns:xs="http://www.w3.org/2001/XMLSchema" > <xs:complexType name="ZForm"> <xs:sequence minOccurs="0" maxOccurs="unbounded"> <xs:element name="Part" minOccurs="0" maxOccurs="unbounded" type="Part"/> </xs:sequence> <xs:attribute name="Title" use="required" type="xs:string"/> <xs:attribute name="Version" type="xs:int"/> </xs:complexType> <xs:complexType name="Part"> <xs:sequence minOccurs="0" maxOccurs="unbounded"> <xs:element name="Label" type="Label" minOccurs="0"></xs:element> </xs:sequence> <xs:attribute name="Title" use="required" type="xs:string"/> </xs:complexType> <xs:complexType name="Label"> <xs:simpleContent> <xs:extension base="xs:string"> <xs:attribute name="Title" type="xs:string"/> </xs:extension> </xs:simpleContent> </xs:complexType> </xs:schema> ExportSchema.xsd: (this one kinda wraps 1 more element (ZForms) around the main element (ZForm) of the DefaultSchema) <?xml version="1.0" encoding="utf-8"?> <xs:schema id="ExportSchema" targetNamespace="http://myNamespace.com/ExportSchema.xsd" elementFormDefault="qualified" xmlns="http://myNamespace.com/DefaultSchema.xsd" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:es="http://myNamespace.com/ExportSchema.xsd" > <xs:import namespace="http://myNamespace.com/DefaultSchema.xsd" schemaLocation="DefaultSchema.xsd"/> <xs:element name="ZForms" type="es:ZFormType"></xs:element> <xs:complexType name="ZFormType"> <xs:sequence> <xs:element name="ZForm" type="ZForm" maxOccurs="unbounded" /> </xs:sequence> </xs:complexType> </xs:schema> And then finally I have a generated xml: <?xml version="1.0" encoding="utf-8"?> <ZForms xmlns="http://myNamespace.com/ExportSchema.xsd"> <ZForm Version="1" Title="FormTitle"> <Part Title="PartTitle" > <Label Title="LabelTitle" /> </Part> </ZForm> </ZForms> Visual studio complains it doesn't know what 'Part' is. I was hoping I do not need to use xml namespace prefixes (..) to make this xml validate, since ExportSchema.xsd has a reference to the DefaultSChema.xsd. Is there any way to make that xml structure valid without explicitly specifying the DefaultSchema.xsd? Or is this a no go?

    Read the article

  • Is it possible to shuffle a 2D matrix while preserving row AND column frequencies?

    - by j_random_hacker
    Suppose I have a 2D array like the following: GACTG AGATA TCCGA Each array element is taken from a small finite set (in my case, DNA nucleotides -- {A, C, G, T}). I would like to randomly shuffle this array somehow while preserving both row and column nucleotide frequencies. Is this possible? Can it be done efficiently? [EDIT]: By this I mean I want to produce a new matrix where each row has the same number of As, Cs, Gs and Ts as the corresponding row of the original matrix, and where each column has the same number of As, Cs, Gs and Ts as the corresponding column of the original matrix. Permuting the rows or columns of the original matrix will not achieve this in general. (E.g. for the example above, the top row has 2 Gs, and 1 each of A, C and T; if this row was swapped with row 2, the top row in the resulting matrix would have 3 As, 1 G and 1 T.) It's simple enough to preserve just column frequencies by shuffling a column at a time, and likewise for rows. But doing this will in general alter the frequencies of the other kind. My thoughts so far (a sketch of the swap move appears after this paragraph): If it's possible to pick 2 rows and 2 columns so that the 4 elements at the corners of this rectangle have the pattern XY YX for some pair of distinct elements X and Y, then replacing these 4 elements with YX XY will maintain both row and column frequencies. In the example at the top, this can be done for (at least) rows 1 and 2 and columns 2 and 5 (whose corners give the 2x2 matrix AG;GA), and for rows 1 and 3 and columns 1 and 4 (whose corners give GT;TG). Clearly this could be repeated a number of times to produce some level of randomisation. Generalising a bit, any "subrectangle" induced by a subset of rows and a subset of columns, in which the frequencies of all rows are the same and the frequencies of all columns are the same, can have both its rows and columns permuted to produce a valid complete rectangle. (Of these, only those subrectangles in which at least 1 element is changed are actually interesting.) Big questions: Are all valid complete matrices reachable by a series of such "subrectangle rearrangements"? I suspect the answer is yes. Are all valid subrectangle rearrangements decomposable into a series of 2x2 swaps? I suspect the answer is no, but I hope it's yes, since that would seem to make it easier to come up with an efficient algorithm. Can some or all of the valid rearrangements be computed efficiently? This question addresses a special case in which the set of possible elements is {0, 1}. The solutions people have come up with there are similar to what I have come up with myself, and are probably usable, but not ideal as they require an arbitrary amount of backtracking to work correctly. Also I'm concerned that only 2x2 swaps are considered. Finally, I would ideally like a solution that can be proven to select a matrix uniformly at random from the set of all matrices having identical row frequencies and column frequencies to the original. I know, I'm asking for a lot :)
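    A minimal sketch of the 2x2 swap idea described in the question (an illustration of the approach, not a complete answer: repeated random swap moves preserve all row and column frequencies, but, as the question itself notes, this gives no guarantee of sampling uniformly from all valid matrices):

    import java.util.Random;

    public class MatrixShuffle {
        // Apply 'attempts' random 2x2 swap moves: pick two rows and two columns, and if the
        // four corner cells form the pattern X Y / Y X with X != Y, replace them with Y X / X Y.
        // Each such move preserves every row's and every column's symbol counts.
        static void shuffle(char[][] m, int attempts, Random rnd) {
            int rows = m.length, cols = m[0].length;
            for (int k = 0; k < attempts; k++) {
                int r1 = rnd.nextInt(rows), r2 = rnd.nextInt(rows);
                int c1 = rnd.nextInt(cols), c2 = rnd.nextInt(cols);
                if (r1 == r2 || c1 == c2) continue;
                char a = m[r1][c1], b = m[r1][c2];
                if (a != b && m[r2][c1] == b && m[r2][c2] == a) {
                    m[r1][c1] = b; m[r1][c2] = a;   // X Y -> Y X
                    m[r2][c1] = a; m[r2][c2] = b;   // Y X -> X Y
                }
            }
        }

        public static void main(String[] args) {
            char[][] m = {
                "GACTG".toCharArray(),
                "AGATA".toCharArray(),
                "TCCGA".toCharArray()
            };
            shuffle(m, 10_000, new Random());
            for (char[] row : m) System.out.println(new String(row));
        }
    }

    Each accepted move only exchanges an X Y / Y X corner pattern for Y X / X Y, so every row and column keeps its exact symbol counts; the number of attempts is a tunable mixing parameter.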

    Read the article

  • SOAP and NHibernate Session in C#

    - by Anonymous Coward
    In a set of SOAP web services the user is authenticated with custom SOAP header (username/password). Each time the user call a WS the following Auth method is called to authenticate and retrieve User object from NHibernate session: [...] public Services : Base { private User user; [...] public string myWS(string username, string password) { if( Auth(username, password) ) { [...] } } } public Base : WebService { protected static ISessionFactory sesFactory; protected static ISession session; static Base { Configuration conf = new Configuration(); [...] sesFactory = conf.BuildSessionFactory(); } private bool Auth(...) { session = sesFactory.OpenSession(); MembershipUser user = null; if (UserCredentials != null && Membership.ValidateUser(username, password)) { luser = Membership.GetUser(username); } ... try { user = (User)session.Get(typeof(User), luser.ProviderUserKey.ToString()); } catch { user = null; throw new [...] } return user != null; } } When the WS work is done the session is cleaned up nicely and everything works: the WSs create, modify and change objects and Nhibernate save them in the DB. The problems come when an user (same username/password) calls the same WS at same time from different clients (machines). The state of the saved objects are inconsistent. How do I manage the session correctly to avoid this? I searched and the documentation about Session management in NHibernate is really vast. Should I Lock over user object? Should I set up a "session share" management between WS calls from same user? Should I use Transaction in some savvy way? Thanks Update1 Yes, mSession is 'session'. Update2 Even with a non-static session object the data saved in the DB are inconsistent. The pattern I use to insert/save object is the following: var return_value = [...]; try { using(ITransaction tx = session.Transaction) { tx.Begin(); MyType obj = new MyType(); user.field = user.field - obj.field; // The fields names are i.e. but this is actually what happens. session.Save(user); session.Save(obj); tx.Commit(); return_value = obj.another_field; } } catch ([...]) { // Handling exceptions... } finally { // Clean up session.Flush(); session.Close(); } return return_value; All new objects (MyType) are correctly saved but the user.field status is not as I would expect. Even obj.another_field is correct (the field is an ID with generated=on-save policy). It is like 'user.field = user.field - obj.field;' is executed more times then necessary.

    Read the article

  • "FOR UPDATE" v/s "LOCK IN SHARE MODE" : Allow concurrent threads to read updated "state" value of locked row

    - by shadesco
    I have the following scenario: user X logs in to the application from location lc1 (call it Ulc1), and user X (he has been hacked, or a friend of his knows his login credentials, or he simply logs in from a different browser on his machine, etc. -- you get the point) logs in at the same time from location lc2 (call it Ulc2). I am using a main servlet which: - gets a connection from the database pool - sets autocommit to false - executes a command that goes through the application layers; if all is successful it sets autocommit back to true in a "finally" block and closes the connection, otherwise, if an exception happens, it calls rollback(). In my database (MySQL/InnoDB) I have a "history" table with the columns: id (primary key) | username | date | topic | locked. The "locked" column defaults to "false" and serves as a flag marking whether a specific row is locked. Each row is specific to a user (as you can see from the username column). Back to the scenario: --Ulc1 sends the command to update his history in the db for date "D" and topic "T". --Ulc2 sends the same command for the same date "D" and topic "T" at exactly the same time. I want to implement a MySQL/InnoDB locking scheme that lets whichever thread arrives perform the following check: is the "locked" column for this row true or not? If true, return a message to the user that "he is already updating the same data from another location"; if not locked, flag the row as locked, update it, then reset locked to false once finished. Which of these two MySQL locking techniques will actually allow the second arriving thread to read the "updated" value of the locked column and decide what action to take? Should I use "FOR UPDATE" or "LOCK IN SHARE MODE"? This scenario explains what I want to accomplish: - the Ulc1 thread arrives first: the "locked" column is false, so it sets it to true and continues the update process - the Ulc2 thread arrives while Ulc1's transaction is still in progress; even though the row is locked through InnoDB, it should not have to wait but should instead read the "new" value of the locked column, which is "true", rather than waiting for Ulc1's transaction to commit before reading it (by that time the column would already have been reset to false anyway). I am not very experienced with these two locking mechanisms. What I understand so far is that LOCK IN SHARE MODE allows other transactions to read the locked row, while FOR UPDATE does not even allow reading. But does that read see the updated value, or does the second arriving thread have to wait for the first thread to commit before it can read the value? Any recommendation about which locking mechanism to use for this scenario is appreciated. Also, if there is a better way to "check" whether a row has been locked (other than a true/false column flag), please let me know about it.
Thank you.

SOLUTION (JDBC example based on @Darhazer's answer)

Table: [ id (primary key) | username | date | topic | locked ]

// 'key', "D", "T" and "Sthg" below are placeholders carried over from the original pseudocode.
// transaction 1: lock the row and read the flag (only matches if it is not already locked)
connection.setAutoCommit(false);
PreparedStatement ps1 = connection.prepareStatement(
    "SELECT locked FROM tableName WHERE id = ? AND locked = false FOR UPDATE");
ps1.setLong(1, key);
ResultSet rs1 = ps1.executeQuery();

// transaction 2: mark the row as locked
PreparedStatement ps2 = connection.prepareStatement(
    "UPDATE tableName SET locked = true WHERE id = ?");
ps2.setLong(1, key);
ps2.executeUpdate();
connection.setAutoCommit(true); // committing here lets other transactions/threads see the new value

// transaction 3: do the real update
connection.setAutoCommit(false);
PreparedStatement ps3 = connection.prepareStatement(
    "UPDATE tableName SET aField = ? WHERE id = ? AND date = ? AND topic = ?");
ps3.setString(1, "Sthg");
ps3.setLong(2, key);
ps3.setString(3, "D");
ps3.setString(4, "T");
ps3.executeUpdate();

// reset locked to false
PreparedStatement ps4 = connection.prepareStatement(
    "UPDATE tableName SET locked = false WHERE id = ?");
ps4.setLong(1, key);
ps4.executeUpdate();

// commit
connection.setAutoCommit(true);

    Read the article

  • JDBC Lock a row using SELECT FOR UPDATE, doesn't work

    - by Rachid
    I am having issues with MySQL's SELECT .. FOR UPDATE; here is the query I am trying to run: SELECT * FROM tableName WHERE HostName='UnknownHost' ORDER BY UpdateTimestamp asc limit 1 FOR UPDATE After this, the concerned thread will do an UPDATE and change the HostName, which should then unlock the row. I am running a multi-threaded Java application, so 3 threads are running this SQL statement, but when thread 1 runs this, it doesn't lock its results from threads 2 & 3. Therefore threads 2 & 3 are getting the same results and they could update the same row. Also, each thread is on its own MySQL connection. I'm using InnoDB, with transaction-isolation = READ-COMMITTED, and autocommit is off before executing the SELECT ... FOR UPDATE. Am I missing something? Or perhaps there is a better solution? Thanks a lot. Code : public BasicJDBCDemo() { Le_Thread newThread1=new Le_Thread(); Le_Thread newThread2=new Le_Thread(); newThread1.start(); newThread2.start(); } Thread : class Le_Thread extends Thread { public void run() { String name = Thread.currentThread().getName(); System.out.println( name+": Debut."); long oid=Util.doSelectLockTest(name); Util.doUpdateTest(oid,name); } } Select : public static long doSelectLockTest(String threadName) { System.out.println("[OUTPUT FROM SELECT Lock ]...threadName="+threadName); PreparedStatement pst = null; ResultSet rs=null; Connection conn=null; long oid=0; try { String query = "SELECT * FROM table WHERE Host=? ORDER BY Timestamp asc limit 1 FOR UPDATE"; conn=getNewConnection(); pst = conn.prepareStatement(query); pst.setString(1, DbProperties.UnknownHost); System.out.println("pst="+threadName+"__"+pst); rs = pst.executeQuery(); if (rs.first()) { String s = rs.getString("HostName"); oid = rs.getLong("OID"); System.out.println("oid_oldest/host/threadName=="+oid+"/"+s+"/"+threadName); } } catch (SQLException ex) { ex.printStackTrace(); } finally { DBUtil.close(pst); DBUtil.close(rs); DBUtil.close(conn); } return oid; } Please help. Result : Thread-1: Debut. Thread-2: Debut. [OUTPUT FROM SELECT Lock ]...threadName=Thread-1 New connection.. [OUTPUT FROM SELECT Lock ]...threadName=Thread-2 New connection.. pst=Thread-2: SELECT * FROM b2biCheckPoint WHERE HostName='UnknownHost' ORDER BY UpdateTimestamp asc limit 1 FOR UPDATE pst=Thread-1: SELECT * FROM b2biCheckPoint WHERE HostName='UnknownHost' ORDER BY UpdateTimestamp asc limit 1 FOR UPDATE oid_oldest/host/threadName==1/UnknownHost/Thread-2 oid_oldest/host/threadName==1/UnknownHost/Thread-1 [Performing UPDATE] ... oid = 1, thread=Thread-2 New connection.. [Performing UPDATE] ... oid = 1, thread=Thread-1 pst_threadname=Thread-2: UPDATE b2bicheckpoint SET HostName='1_host_Thread-2',UpdateTimestamp=1294940161838 where OID = 1 New connection.. pst_threadname=Thread-1: UPDATE b2bicheckpoint SET HostName='1_host_Thread-1',UpdateTimestamp=1294940161853 where OID = 1
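    One detail worth checking in the code above: doSelectLockTest closes its connection in the finally block, and doUpdateTest then opens a new connection. Closing (or committing) the connection releases the row lock taken by FOR UPDATE, so by the time the UPDATE runs nothing is locked and the other threads can read the same row. Below is a minimal sketch of claiming and updating the row inside one transaction on one connection; the table and column names are taken from the question's output, while the class and method names are assumptions:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public final class HostClaimer {
        // Claims the oldest 'UnknownHost' row and updates it in a single transaction.
        // The row lock taken by FOR UPDATE is held until commit/rollback, so two
        // threads calling this concurrently cannot claim the same row.
        static long claimOldest(Connection conn, String newHostName) throws SQLException {
            boolean oldAutoCommit = conn.getAutoCommit();
            conn.setAutoCommit(false);
            try (PreparedStatement select = conn.prepareStatement(
                     "SELECT OID FROM b2biCheckPoint WHERE HostName = ? "
                   + "ORDER BY UpdateTimestamp ASC LIMIT 1 FOR UPDATE");
                 PreparedStatement update = conn.prepareStatement(
                     "UPDATE b2biCheckPoint SET HostName = ?, UpdateTimestamp = ? WHERE OID = ?")) {
                select.setString(1, "UnknownHost");
                long oid = -1;
                try (ResultSet rs = select.executeQuery()) {
                    if (rs.next()) {
                        oid = rs.getLong("OID");
                    }
                }
                if (oid != -1) {
                    update.setString(1, newHostName);
                    update.setLong(2, System.currentTimeMillis());
                    update.setLong(3, oid);
                    update.executeUpdate();
                }
                conn.commit();          // releases the row lock
                return oid;
            } catch (SQLException e) {
                conn.rollback();        // also releases the lock on failure
                throw e;
            } finally {
                conn.setAutoCommit(oldAutoCommit);
            }
        }
    }

    With this shape, a second thread running the same SELECT ... FOR UPDATE blocks until the first transaction commits, and then no longer matches the row because HostName has already changed.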

    Read the article

  • how to mount partitions from USB drives in Windows using Delphi?

    - by user569556
    Hi. I'm a Delphi programmer. I want to mount all partitions from USB drives in Windows (XP). The OS is doing this automatically but there are situations when such a program is useful. I know how to find if a drive is on USB or not. My code so far is: type STORAGE_QUERY_TYPE = (PropertyStandardQuery = 0, PropertyExistsQuery, PropertyMaskQuery, PropertyQueryMaxDefined); TStorageQueryType = STORAGE_QUERY_TYPE; STORAGE_PROPERTY_ID = (StorageDeviceProperty = 0, StorageAdapterProperty); TStoragePropertyID = STORAGE_PROPERTY_ID; STORAGE_PROPERTY_QUERY = packed record PropertyId: STORAGE_PROPERTY_ID; QueryType: STORAGE_QUERY_TYPE; AdditionalParameters: array[0..9] of AnsiChar; end; TStoragePropertyQuery = STORAGE_PROPERTY_QUERY; STORAGE_BUS_TYPE = (BusTypeUnknown = 0, BusTypeScsi, BusTypeAtapi, BusTypeAta, BusType1394, BusTypeSsa, BusTypeFibre, BusTypeUsb, BusTypeRAID, BusTypeiScsi, BusTypeSas, BusTypeSata, BusTypeMaxReserved = $7F); TStorageBusType = STORAGE_BUS_TYPE; STORAGE_DEVICE_DESCRIPTOR = packed record Version: DWORD; Size: DWORD; DeviceType: Byte; DeviceTypeModifier: Byte; RemovableMedia: Boolean; CommandQueueing: Boolean; VendorIdOffset: DWORD; ProductIdOffset: DWORD; ProductRevisionOffset: DWORD; SerialNumberOffset: DWORD; BusType: STORAGE_BUS_TYPE; RawPropertiesLength: DWORD; RawDeviceProperties: array[0..0] of AnsiChar; end; TStorageDeviceDescriptor = STORAGE_DEVICE_DESCRIPTOR; const IOCTL_STORAGE_QUERY_PROPERTY = $002D1400; var i: Integer; H: THandle; USBDrives: array of Byte; Query: TStoragePropertyQuery; dwBytesReturned: DWORD; Buffer: array[0..1023] of Byte; sdd: TStorageDeviceDescriptor absolute Buffer; begin SetLength(UsbDrives, 0); SetErrorMode(SEM_FAILCRITICALERRORS); for i := 0 to 99 do begin H := CreateFile(PChar('\\.\PhysicalDrive' + IntToStr(i)), 0, FILE_SHARE_READ or FILE_SHARE_WRITE, nil, OPEN_EXISTING, 0, 0); if H <> INVALID_HANDLE_VALUE then begin try dwBytesReturned := 0; FillChar(Query, SizeOf(Query), 0); FillChar(Buffer, SizeOf(Buffer), 0); sdd.Size := SizeOf(Buffer); Query.PropertyId := StorageDeviceProperty; Query.QueryType := PropertyStandardQuery; if DeviceIoControl(H, IOCTL_STORAGE_QUERY_PROPERTY, @Query, SizeOf(Query), @Buffer, SizeOf(Buffer), dwBytesReturned, nil) then if sdd.BusType = BusTypeUsb then begin SetLength(USBDrives, Length(USBDrives) + 1); UsbDrives[High(USBDrives)] := Byte(i); end; finally CloseHandle(H); end; end; end; for i := 0 to High(USBDrives) do begin // end; end. But I don't know how to access partitions on each drive and mounts them. Can you please help me? I searched before I asked and I couldn't find a working code. But if I did not properly then I'm sorry and please show me that topic. Best regards, John

    Read the article

  • Java deadlock problem....

    - by markovuksanovic
    I am using java sockets for communication. On the client side I have some processing and at this point I send an object to the cient. The code is as follows: while (true) { try { Socket server = new Socket("localhost", 3000); OutputStream os = server.getOutputStream(); InputStream is = server.getInputStream(); CommMessage commMessage = new CommMessageImpl(); ByteArrayOutputStream bos = new ByteArrayOutputStream(); ObjectOutputStream oos = new ObjectOutputStream(bos); oos.writeObject(commMessage); os.write(bos.toByteArray()); os.flush(); byte[] buff = new byte[512]; int bytesRead = 0; ByteArrayOutputStream receivedObject = new ByteArrayOutputStream(); while ((bytesRead = is.read(buff)) > -1) { receivedObject.write(buff, 0, bytesRead); System.out.println(receivedObject); } os.close(); Thread.sleep(10000); } catch (IOException e) { } catch (InterruptedException e) { } } Next on the server side I have the following code to read the object and write the response (Which is just an echo message) public void startServer() { Socket client = null; try { server = new ServerSocket(3000); logger.log(Level.INFO, "Waiting for connections."); client = server.accept(); logger.log(Level.INFO, "Accepted a connection from: " + client.getInetAddress()); os = new ObjectOutputStream(client.getOutputStream()); is = new ObjectInputStream(client.getInputStream()); // Read contents of the stream and store it into a byte array. byte[] buff = new byte[512]; int bytesRead = 0; ByteArrayOutputStream receivedObject = new ByteArrayOutputStream(); while ((bytesRead = is.read(buff)) > -1) { receivedObject.write(buff, 0, bytesRead); } // Check if received stream is CommMessage or not contents. CommMessage commMessage = getCommMessage(receivedObject); if (commMessage != null) { commMessage.setSessionState(this.sessionManager.getState().getState()); ByteArrayOutputStream bos = new ByteArrayOutputStream(); ObjectOutputStream oos = new ObjectOutputStream(bos); oos.writeObject(commMessage); os.write(bos.toByteArray()); System.out.println(commMessage.getCommMessageType()); } else { processData(receivedObject, this.sessionManager); } os.flush(); } catch (IOException e) { } finally { try { is.close(); os.close(); client.close(); server.close(); } catch (IOException e) { } } } The above code works ok if I do not try to read data on the client side (If i exclude the code related to reading). But if I have that code, for some reason, I get some kind of deadlock when accessing input streams. Any ideas what I might have done wrong? Thanks in advance.
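    The read loops on both sides are the likely culprit: while ((bytesRead = is.read(buff)) > -1) only ends when the peer closes its end of the connection, so the client waits for the server's reply to finish while the server waits for the client's request stream to end, and neither makes progress. A minimal sketch of one way out, assuming the same request/response shape as above: the client half-closes the socket with shutdownOutput() after writing, which lets the server's read loop reach end-of-stream and send its reply (the payload here is a placeholder string rather than the question's CommMessageImpl):

    import java.io.ByteArrayOutputStream;
    import java.io.InputStream;
    import java.io.ObjectOutputStream;
    import java.io.OutputStream;
    import java.net.Socket;

    public class EchoClient {
        public static void main(String[] args) throws Exception {
            try (Socket server = new Socket("localhost", 3000)) {
                OutputStream os = server.getOutputStream();
                InputStream is = server.getInputStream();

                // Serialize the request object (stand-in for the question's CommMessageImpl).
                ByteArrayOutputStream bos = new ByteArrayOutputStream();
                try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
                    oos.writeObject("hello");   // placeholder payload
                }
                os.write(bos.toByteArray());
                os.flush();

                // Half-close: tells the server "no more request bytes", so its
                // read-until-EOF loop terminates, while the socket stays open for the reply.
                server.shutdownOutput();

                // Now read the reply until the server closes its side.
                ByteArrayOutputStream reply = new ByteArrayOutputStream();
                byte[] buff = new byte[512];
                int bytesRead;
                while ((bytesRead = is.read(buff)) > -1) {
                    reply.write(buff, 0, bytesRead);
                }
                System.out.println("received " + reply.size() + " bytes");
            }
        }
    }

    An alternative with the same effect is to drop the EOF-delimited framing entirely and use ObjectOutputStream/ObjectInputStream with writeObject/readObject on both sides, since object streams carry their own message boundaries.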

    Read the article

  • Update transaction in SQL Server 2008 R2 from ASP.Net not working

    - by Amarus
    Hello! Even though I've been a stalker here for ages, this is the first post I'm making. Hopefully, it won't end here and more optimistically future posts might actually be me trying to give a hand to someone else, I do owe this community that much and more. Now, what I'm trying to do is simple and most probably the reason behind it not working is my own stupidity. However, I'm stumped here. I'm working on an ASP.Net website that interacts with an SQL Server 2008 R2 database. So far everything has been going okay but updating a row (or more) just won't work. I even tried copying and pasting code from this site and others but it's always the same thing. In short: No exception or errors are shown when the update command executes (it even gives the correct count of affected rows) but no changes are actually made on the database. Here's a simplified version of my code (the original had more commands and tons of parameters each, but even when it's like this it doesn't work): protected void btSubmit_Click(object sender, EventArgs e) { using (SqlConnection connection = new SqlConnection(System.Configuration.ConfigurationManager.ConnectionStrings["ApplicationServices"].ConnectionString)) { string commandString = "UPDATE [impoundLotAlpha].[dbo].[Vehicle]" + "SET [VehicleMake] = @VehicleMake" + " WHERE [ComplaintID] = @ComplaintID"; using (SqlCommand command = new SqlCommand(commandString, connection)) { SqlTransaction transaction = null; try { command.Connection.Open(); transaction = connection.BeginTransaction(IsolationLevel.Serializable); command.Transaction = transaction; SqlParameter complaintID = new SqlParameter("@complaintID", SqlDbType.Int); complaintID.Value = HttpContext.Current.Request.QueryString["complaintID"]; command.Parameters.Add(complaintID); SqlParameter VehicleMake = new SqlParameter("@VehicleMake", SqlDbType.VarChar, 20); VehicleMake.Value = tbVehicleMake.Text; command.Parameters.Add(VehicleMake); command.ExecuteNonQuery(); transaction.Commit(); } catch { transaction.Rollback(); throw; } finally { connection.Close(); } } } } I've tried this with the "SqlTransaction" stuff and without it and nothing changes. Also, since I'm doing multiple updates at once, I want to have them act as a single transaction. I've found that it can be either done like this or by use of the classes included in the System.Transactions namespace (CommittableTransaction, TransactionScope...). I tried all I could find but didn't get any different results. The connection string in web.config is as follows: <connectionStrings> <add name="ApplicationServices" connectionString="Data Source=localhost;Initial Catalog=ImpoundLotAlpha;Integrated Security=True" providerName="System.Data.SqlClient"/> </connectionStrings> So, tldr; version: What is the mistake that I did with that record update attempt? (Figured it out, check below if you're having a similar issue.) What is the best method to gather multiple update commands as a single transaction? Thanks in advance for any kind of help and/or suggestions! Edit: It seems that I was lacking some sleep yesterday cause this time it only took me 5 minutes to figure out my mistake. Apparently the update was working properly but I failed to notice that the textbox values were being overwritten in Page_Load. For some reason I had this part commented: if (IsPostBack) return; The second part of the question still stands. But should I post this as an answer to my own question or keep it like this?

    Read the article

< Previous Page | 211 212 213 214 215 216 217 218 219 220 221 222  | Next Page >