Search Results

Search found 13451 results on 539 pages for 'physical environment'.


  • Unable to access Intel fake RAID 1 array in Fedora 14 after reboot

    - by Sim
    Hello everyone. First, I am relatively new to Linux (but not to *nix). I have 4 disks assembled into the following Intel AHCI BIOS fake-RAID arrays:

        2x320GB RAID1 - used for operating systems (md126)
        2x1TB   RAID1 - used for data (md125)

    I used the 320GB array to install my operating system; the second array I didn't even select during the installation of Fedora 14. After successfully partitioning and installing Fedora, I tried to make the second array available. I was able to make it visible in Linux with mdadm --assemble --scan; after that I created one maximum-size partition and one maximum-size ext4 filesystem in it, mounted it, and used it.

    After a restart I got a few I/O errors during boot regarding md125, the filesystem on it could not be mounted, and I was dropped into the repair shell. I commented the filesystem out of fstab and the machine booted. To my surprise, the array was marked as "auto-read-only":

        [root@localhost ~]# cat /proc/mdstat
        Personalities : [raid1]
        md125 : active (auto-read-only) raid1 sdc[1] sdd[0]
              976759808 blocks super external:/md127/0 [2/2] [UU]
        md127 : inactive sdc[1](S) sdd[0](S)
              4514 blocks super external:imsm
        md126 : active raid1 sda[1] sdb[0]
              312566784 blocks super external:/md1/0 [2/2] [UU]
        md1 : inactive sdb[1](S) sda[0](S)
              4514 blocks super external:imsm
        unused devices: <none>

    and the partition on it was not available as a device special file in /dev:

        [root@localhost ~]# ls -l /dev/md125*
        brw-rw---- 1 root disk 9, 125 Jan  6 15:50 /dev/md125

    But the partition is there according to fdisk:

        [root@localhost ~]# fdisk -l /dev/md125
        Disk /dev/md125: 1000.2 GB, 1000202043392 bytes
        19 heads, 10 sectors/track, 10281682 cylinders, total 1953519616 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x1b238ea9

           Device Boot      Start         End      Blocks   Id  System
        /dev/md125p1         2048  1953519615   976758784   83  Linux

    I tried to "activate" the array in different ways (I'm not experienced with mdadm, and the man page is gigantic, so I was only browsing it looking for my answer), but nothing worked - the array stayed "auto-read-only" and the device special file for the partition would not appear in /dev. Only after I recreated the partition via fdisk did it reappear in /dev... until the next reboot.

    So, my question is: how do I make the array automatically available after reboot?

    Here is some additional information. I am able to see the UUID of the array in blkid:

        [root@localhost ~]# blkid
        /dev/sdc: UUID="b9a1149f-ae11-4fc8-a600-0d77354dc42a" SEC_TYPE="ext2" TYPE="ext3"
        /dev/sdd: UUID="b9a1149f-ae11-4fc8-a600-0d77354dc42a" SEC_TYPE="ext2" TYPE="ext3"
        /dev/md126p1: UUID="60C8D9A7C8D97C2A" TYPE="ntfs"
        /dev/md126p2: UUID="3d1b38a3-b469-4b7c-b016-8abfb26a5d7d" TYPE="ext4"
        /dev/md126p3: UUID="1Msqqr-AAF8-k0wi-VYnq-uWJU-y0OD-uIFBHL" TYPE="LVM2_member"
        /dev/mapper/vg00-rootlv: LABEL="_Fedora-14-x86_6" UUID="34cc1cf5-6845-4489-8303-7a90c7663f0a" TYPE="ext4"
        /dev/mapper/vg00-swaplv: UUID="4644d857-e13b-456c-ac03-6f26299c1046" TYPE="swap"
        /dev/mapper/vg00-homelv: UUID="82bd58b2-edab-4b4b-aec4-b79595ecd0e3" TYPE="ext4"
        /dev/mapper/vg00-varlv: UUID="1b001444-5fdd-41b6-a59a-9712ec6def33" TYPE="ext4"
        /dev/mapper/vg00-tmplv: UUID="bf7d2459-2b35-4a1c-9b81-d4c4f24a9842" TYPE="ext4"
        /dev/md125: UUID="b9a1149f-ae11-4fc8-a600-0d77354dc42a" SEC_TYPE="ext2" TYPE="ext3"
        /dev/sda: TYPE="isw_raid_member"
        /dev/md125p1: UUID="420adfdd-6c4e-4552-93f0-2608938a4059" TYPE="ext4"

    Here is what /etc/mdadm.conf looks like:

        [root@localhost ~]# cat /etc/mdadm.conf
        # mdadm.conf written out by anaconda
        MAILADDR root
        AUTO +imsm +1.x -all
        ARRAY /dev/md1 UUID=89f60dee:e46a251f:7475814b:d4cc19a9
        ARRAY /dev/md126 UUID=a8775c90:cee66376:5310fc13:63bcba5b
        ARRAY /dev/md125 UUID=b9a1149f:ae114fc8:a6000d77:354dc42a

    Here is what /proc/mdstat looks like after I recreate the partition in the array so that it becomes available:

        [root@localhost ~]# cat /proc/mdstat
        Personalities : [raid1]
        md125 : active raid1 sdc[1] sdd[0]
              976759808 blocks super external:/md127/0 [2/2] [UU]
        md127 : inactive sdc[1](S) sdd[0](S)
              4514 blocks super external:imsm
        md126 : active raid1 sda[1] sdb[0]
              312566784 blocks super external:/md1/0 [2/2] [UU]
        md1 : inactive sdb[1](S) sda[0](S)
              4514 blocks super external:imsm
        unused devices: <none>

    Detailed output regarding the array in question:

        [root@localhost ~]# mdadm --detail /dev/md125
        /dev/md125:
              Container : /dev/md127, member 0
             Raid Level : raid1
             Array Size : 976759808 (931.51 GiB 1000.20 GB)
          Used Dev Size : 976759940 (931.51 GiB 1000.20 GB)
           Raid Devices : 2
          Total Devices : 2
            Update Time : Fri Jan  7 00:38:00 2011
                  State : clean
         Active Devices : 2
        Working Devices : 2
         Failed Devices : 0
          Spare Devices : 0
                   UUID : 30ebc3c2:b6a64751:4758d05c:fa8ff782
            Number   Major   Minor   RaidDevice State
               1       8       32        0      active sync   /dev/sdc
               0       8       48        1      active sync   /dev/sdd

    And /etc/fstab, with /data commented out (the filesystem that is on this array):

        #
        # /etc/fstab
        # Created by anaconda on Thu Jan  6 03:32:40 2011
        #
        # Accessible filesystems, by reference, are maintained under '/dev/disk'
        # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
        #
        /dev/mapper/vg00-rootlv                     /          ext4    defaults        1 1
        UUID=3d1b38a3-b469-4b7c-b016-8abfb26a5d7d   /boot      ext4    defaults        1 2
        #UUID=420adfdd-6c4e-4552-93f0-2608938a4059  /data      ext4    defaults        0 1
        /dev/mapper/vg00-homelv                     /home      ext4    defaults        1 2
        /dev/mapper/vg00-tmplv                      /tmp       ext4    defaults        1 2
        /dev/mapper/vg00-varlv                      /var       ext4    defaults        1 2
        /dev/mapper/vg00-swaplv                     swap       swap    defaults        0 0
        tmpfs                                       /dev/shm   tmpfs   defaults        0 0
        devpts                                      /dev/pts   devpts  gid=5,mode=620  0 0
        sysfs                                       /sys       sysfs   defaults        0 0
        proc                                        /proc      proc    defaults        0 0

    Thanks in advance to everyone who even read this whole issue :-)
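    A hedged sketch of one direction an answer might take (standard mdadm/partprobe commands, but untested against this box): IMSM member arrays assembled at boot start out auto-read-only until something writes to them, and a changed partition table is not re-read automatically, so forcing the array read-write and re-probing it may be enough; regenerating mdadm.conf from the live arrays also makes the recorded UUIDs match what the kernel actually assembles:

        # Regenerate the ARRAY lines from the arrays as currently assembled
        # (merge back any AUTO/MAILADDR lines you want to keep):
        mdadm --detail --scan > /etc/mdadm.conf
        # Kick the member array out of auto-read-only and re-read its
        # partition table so /dev/md125p1 appears:
        mdadm --readwrite /dev/md125
        partprobe /dev/md125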

  • Override variables while testing a standalone Perl script

    - by BrianH
    There is a Perl script in our environment that I now need to maintain. It is full of bad practices, including using (and re-using) global variables throughout the script. Before I start making changes, I want to write some test scripts so I have a good regression base. To do this, I was going to use a method described on this page. I started by writing tests for a single subroutine, and put this line near the top of the script under test:

        return 1 if ( caller() );

    That way, in my test script, I can require 'script_to_test.pl'; and it won't execute the whole script.

    The first subroutine I want to test makes heavy use of global variables that are set throughout the script. My thought was to override these variables in my test script, something like this:

        require_ok('script_to_test.pl');
        $var_from_other_script = 'Override Value';
        ok( sub_from_other_script() );

    Unfortunately for me, the script I am testing has a massive "my" block at the top, where it declares all the variables used in the script. This prevents my test script from seeing or changing those variables. I've played with Exporter, Test::Mock..., and some other modules, but it looks like if I want to change any variables, I am going to have to modify the other script in some fashion.

    My goal is to avoid changing the other script while still getting some good tests running, so that when I do start changing it I can make sure I haven't broken anything. The script is about 10,000 lines (3,000 of them in the main block), and I'm afraid that changing one part will affect other parts of the code, so a good test suite would help.

    Is this possible? Can a calling script modify variables declared with "my" in another script? And please don't jump in with answers like "just re-write the script from scratch" - that may be the best solution, but it doesn't answer my question, and we don't have the time or resources for a re-write.
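    One hedged possibility (not from the original thread, and assuming the subroutine closes over the file-scoped lexicals): the PadWalker module from CPAN can expose a sub's lexicals by name, and assigning through the returned references changes them in place, with no edits to the script under test. A minimal sketch:

        use Test::More tests => 1;
        use PadWalker qw(peek_sub);

        require 'script_to_test.pl';

        # peek_sub() returns a hashref of the lexicals the sub closes over,
        # keyed by name (sigil included), as references we can write through.
        my $pad = peek_sub( \&sub_from_other_script );
        ${ $pad->{'$var_from_other_script'} } = 'Override Value';

        ok( sub_from_other_script() );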

  • Writing more efficient XQuery code (avoiding redundant iteration)

    - by Coquelicot
    Here's a simplified version of a problem I'm working on: I have a bunch of XML data that encodes information about people. Each person is uniquely identified by an 'id' attribute, but they may go by many names. For example, in one document I might find:

        <person id=1>Paul Mcartney</person>
        <person id=2>Ringo Starr</person>

    And in another I might find:

        <person id=1>Sir Paul McCartney</person>
        <person id=2>Richard Starkey</person>

    I want to use XQuery to produce a new document that lists every name associated with a given id, i.e.:

        <person id=1>
          <name>Paul McCartney</name>
          <name>Sir Paul McCartney</name>
          <name>James Paul McCartney</name>
        </person>
        <person id=2>
          ...
        </person>

    The way I'm doing this now in XQuery is something like this (pseudocode-esque):

        let $ids := distinct-terms( [all the id attributes on people] )
        for $id in $ids
        return
          <person id={$id}> {
            for $unique-name in distinct-values (
              for $name in ( [all names] )
              where $name/@id=$id
              return $name
            )
            return <name>{$unique-name}</name>
          } </person>

    The problem is that this is really slow. I imagine the bottleneck is the innermost loop, which executes once for every id (of which there are about 1200). I'm dealing with a fair bit of data (300 MB, spread over about 800 XML files), so even a single execution of the query in the inner loop takes about 12 seconds, which means that repeating it 1200 times will take about 4 hours (which might be optimistic - the process has been running for 3 hours so far). Not only is it slow, it's using a whole lot of virtual memory. I'm using Saxon, and I had to set Java's maximum heap size to 10 GB (!) to avoid out-of-memory errors, and it's currently using 6 GB of physical memory.

    Here's how I'd really like to do this (in Pythonic pseudocode):

        persons = {}
        for id in ids:
            persons[id] = set()
        for person in all_the_people_in_my_xml_document:
            persons[person.id].add(person.name)

    There, I just did it in linear time, with only one sweep of the XML document. Now, is there some way to do something similar in XQuery? Surely if I can imagine it, a reasonable programming language should be able to do it (he said quixotically). The problem, I suppose, is that unlike Python, XQuery doesn't (as far as I know) have anything like an associative array. Is there some clever way around this? Failing that, is there something better than XQuery that I might use to accomplish my goal? Because really, the computational resources I'm throwing at this relatively simple problem are kind of ridiculous.
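    A hedged sketch of one route (assuming a processor with XQuery 3.0 support, which later Saxon versions have; the collection URI is a placeholder): the group by clause gives exactly the one-pass, associative-array behaviour described above, since the implementation hashes on the grouping key:

        (: one sweep over the documents; group by buckets people per id :)
        for $p in collection("people-docs")//person
        group by $id := string($p/@id)
        return
          <person id="{$id}"> {
            (: after grouping, $p is the whole sequence for this id :)
            for $n in distinct-values($p)
            return <name>{$n}</name>
          } </person>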

  • Search for multiple values in an XML column

    - by Yuriy Gettya
    Environment: SQL Server 2012. A primary and a secondary (value) XML index are built on the xml column.

    Say I have a table Message with an xml column WordIndex. I also have a table Word, which has WordId and WordText. The XML in Message.WordIndex has the following schema:

        <xs:schema attributeFormDefault="unqualified" elementFormDefault="qualified"
                   xmlns:xs="http://www.w3.org/2001/XMLSchema"
                   targetNamespace="http://www.example.com">
          <xs:element name="wi">
            <xs:complexType>
              <xs:sequence>
                <xs:element maxOccurs="unbounded" name="w">
                  <xs:complexType>
                    <xs:sequence>
                      <xs:element maxOccurs="unbounded" name="p" type="xs:unsignedByte" />
                    </xs:sequence>
                    <xs:attribute name="wid" type="xs:unsignedByte" use="required" />
                  </xs:complexType>
                </xs:element>
              </xs:sequence>
            </xs:complexType>
          </xs:element>
        </xs:schema>

    and some data to go with it:

        <wi xmlns="http://www.example.com">
          <w wid="1">
            <p>28</p>
            <p>72</p>
            <p>125</p>
          </w>
          <w wid="4">
            <p>89</p>
          </w>
          <w wid="5">
            <p>11</p>
          </w>
        </wi>

    I need to search for multiple values in my xml column WordIndex, using either OR or AND. What I'm doing is fairly rudimentary, since I'm a n00b in XQuery (taken from debug output, hence real values):

        with xmlnamespaces(default 'http://www.example.com')
        select m.Subject, m.MessageId,
               m.WordIndex.query('
                 let $dummy := 0
                 return
                 <word_list>
                   { for $w in /wi/w where $w/@wid=64 return <word wid="64" pos="{data($w/p)}"/> }
                   { for $w in /wi/w where $w/@wid=70 return <word wid="70" pos="{data($w/p)}"/> }
                   { for $w in /wi/w where $w/@wid=63 return <word wid="63" pos="{data($w/p)}"/> }
                 </word_list>
               ') as WordPosition
        from Message as m
        -- more joins go here ...
        where
        -- more conditions go here ...
        and m.WordIndex.exist('/wi/w[@wid=64]') = 1
        and m.WordIndex.exist('/wi/w[@wid=70]') = 1
        and m.WordIndex.exist('/wi/w[@wid=63]') = 1

    How can this be optimized?
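    A couple of hedged micro-rewrites sometimes suggested for this pattern (sketches under assumptions, not tested against this schema): a value sequence inside a single exist() gives OR semantics in one probe, and the three copies of the FLWOR collapse into one loop; AND semantics still needs one exist() per id:

        -- OR semantics: one exist() with a sequence of wanted ids
        where m.WordIndex.exist('
            declare default element namespace "http://www.example.com";
            /wi/w[@wid = (63, 64, 70)]') = 1

        -- the projection as a single loop instead of three copies:
        select m.WordIndex.query('
            declare default element namespace "http://www.example.com";
            <word_list> {
              for $w in /wi/w[@wid = (63, 64, 70)]
              return <word wid="{data($w/@wid)}" pos="{data($w/p)}"/>
            } </word_list>')
        from Message as m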

  • Perl newbie: get a proper minimal debug_mode solution

    - by Michael Mao
    Hi all: I am learning Perl in a "head-first" manner. I am absolutely a newbie in this language. I am trying to have a debug_mode switch on the CLI which can be used to control how my script works, by switching certain subroutines "on and off". Below is what I've got so far:

        #!/usr/bin/perl -s -w
        # purpose : make subroutine execution optional,
        # depending on a CLI switch flag
        use strict;
        use warnings;

        use constant DEBUG_VERBOSE              => "v";
        use constant DEBUG_SUPPRESS_ERROR_MSGS  => "s";
        use constant DEBUG_IGNORE_VALIDATION    => "i";
        use constant DEBUG_STEPPING_COMPUTATION => "c";

        our ($debug_mode);

        mainMethod();

        sub mainMethod # ()
        {
            if(!$debug_mode)
            {
                print "debug_mode is OFF\n";
            }
            elsif($debug_mode)
            {
                print "debug_mode is ON\n";
            }
            else
            {
                print "OMG!\n";
                exit -1;
            }
            checkArgv();
            printErrorMsg("Error_Code_123", "Parsing Error at...");
            verbose();
        }

        sub checkArgv #()
        {
            print ("Number of ARGV : ".(1 + $#ARGV)."\n");
        }

        sub printErrorMsg # ($error_code, $error_msg, ..)
        {
            if(defined($debug_mode) && !($debug_mode =~ DEBUG_SUPPRESS_ERROR_MSGS))
            {
                print "You can only see me if -debug_mode is NOT set".
                      " to DEBUG_SUPPRESS_ERROR_MSGS\n";
                die("terminated prematurely...\n") and exit -1;
            }
        }

        sub verbose # ()
        {
            if(defined($debug_mode) && ($debug_mode =~ DEBUG_VERBOSE))
            {
                print "Blah blah blah...\n";
            }
        }

    So far as I can tell, it at least works: the -debug_mode switch doesn't interfere with normal ARGV, and the following command lines work:

        ./optional.pl
        ./optional.pl -debug_mode
        ./optional.pl -debug_mode=v
        ./optional.pl -debug_mode=s

    However, I am puzzled when multiple debug modes are "mixed", such as:

        ./optional.pl -debug_mode=sv
        ./optional.pl -debug_mode=vs

    I don't understand why the above lines "magically work". I see both DEBUG_VERBOSE and DEBUG_SUPPRESS_ERROR_MSGS apply to the script, which is fine in this case. However, if there are "conflicting" debug modes, I am not sure how to set the precedence of debug modes. Also, I am not certain my approach is good enough for Perlists, and I hope I am getting my feet in the right direction. One big problem is that I now put if statements inside most of my subroutines to control their behavior under different modes. Is this okay? Is there a more elegant way? I know there must be a debug module on CPAN or elsewhere, but I want a truly minimal solution that doesn't depend on any module beyond the defaults, and I cannot have any control over the environment where this script will be executed... Many thanks in advance for the suggestions.
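    Worth noting: the "magic" is that $debug_mode =~ DEBUG_VERBOSE uses the one-character constant as a regex, so any mode string containing that letter matches. A hedged, minimal, no-extra-module sketch that keeps this behaviour but makes the flag set explicit, leaving precedence as simply the order of the checks:

        # split the -debug_mode string into a set of single-letter flags
        my %flags = map { $_ => 1 } split //, (defined $debug_mode ? $debug_mode : '');

        sub is_mode { my ($letter) = @_; return $flags{$letter}; }

        # precedence is then just the order you test in, e.g. 's' beats 'v':
        if    (is_mode(DEBUG_SUPPRESS_ERROR_MSGS)) { }    # stay quiet
        elsif (is_mode(DEBUG_VERBOSE))             { print "Blah blah blah...\n"; }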

  • ColdFusion Report Builder - How can you set different datasources externally between prod/staging/dev?

    - by Smooth Operator
    ColdFusion Report Builder is great. One small issue. We use ANT+CFANT to deploy. Say we create the report against a datasource called MyApp_Dev on a dev box. Everything works great when the report is created. We then deploy the report to our staging server, which has a datasource of MyApp_Staging. That server may or may not also have the live app working under MyApp_Live. Ant pushes the update to staging just fine. Run the report: it crashes and burns. Why? The report is looking for the MyApp_Dev datasource, even though the application is using the MyApp_Staging datasource.

    In digging around I found a few approaches. I would like to do this the final, ideal way from the beginning, instead of having to go back and redo dozens of reports differently when I have a new aha! moment.

    1) Obvious: pass the datasource into the cfreport tag. Doesn't work for ColdFusion Report Builder reports as of v8, or v9 as tested, on Linux.

    2) Most realistic option (but painful) so far: pass the query as an object into the ColdFusion Report Builder report. Let's think about this: create the report with Report Builder to my heart's content, using RDS etc. on my local box. When I'm done, copy the query into a snippet of code, or into a database column, to be dynamically injected at runtime with the correct datasource. Modify my "run report" event to fetch the query from the database column, insert it into another dynamic cfquery and potentially... evaluate (!?!) it? The fun side is that I can set the cfquery datasource to whatever each environment needs. The catch: whenever I modify the report's columns in CF Report Builder, I always have to update the query in the database. Is there a snippet of code that can extract this for me? Hmm.

    3) Less than ideal: suck it up and let all the reports in staging run off the live server. Maybe copy the live data into staging (sans structural changes) so it seems similar.

    Are there any eloquent ways to accomplish the above? Thanks in advance!
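    A hedged sketch of option 2 (names are made up for illustration - application.dsn and reportSql stand in for a per-environment datasource setting and the stored query text): run the stored SQL against the environment's own datasource and hand the resulting query object to cfreport:

        <!--- a minimal sketch, assuming application.dsn is set per
              environment and reportSql was loaded from your table --->
        <cfquery name="reportQuery" datasource="#application.dsn#">
            #preserveSingleQuotes(reportSql)#
        </cfquery>
        <cfreport template="reports/myReport.cfr" format="PDF"
                  query="#reportQuery#" />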

  • Servlet IOException when streaming a Byte Array

    - by MontyBongo
    I occasionally get an IOException from a servlet I have that writes a byte array to the output stream in order to provide a file-download capability. This download servlet has a reasonable amount of traffic (100k hits a month) and the exception occurs rarely, around 1-2 times a month. I have tried recreating the exception by using the exact same Base64 string, but no exception is raised and the servlet behaves as designed. Is this IOException being caused by something outside my application's control - for example, a network issue or the user resetting the connection? I have tried to google the cause of an IOException at this point in the stack, but to no avail. The environment is Tomcat 5.5 on CentOS 5.3, with an Apache HTTP Server acting as a proxy using proxy_ajp.

        ClientAbortException: java.io.IOException
            at org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:366)
            at org.apache.tomcat.util.buf.ByteChunk.append(ByteChunk.java:352)
            at org.apache.catalina.connector.OutputBuffer.writeBytes(OutputBuffer.java:392)
            at org.apache.catalina.connector.OutputBuffer.write(OutputBuffer.java:381)
            at org.apache.catalina.connector.CoyoteOutputStream.write(CoyoteOutputStream.java:89)
            at org.apache.catalina.connector.CoyoteOutputStream.write(CoyoteOutputStream.java:83)
            at com.myApp.Download.doPost(Download.java:34)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:710)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:803)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:269)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:188)
            at org.apache.catalina.core.ApplicationDispatcher.invoke(ApplicationDispatcher.java:691)
            at org.apache.catalina.core.ApplicationDispatcher.processRequest(ApplicationDispatcher.java:469)
            at org.apache.catalina.core.ApplicationDispatcher.doForward(ApplicationDispatcher.java:403)
            at org.apache.catalina.core.ApplicationDispatcher.forward(ApplicationDispatcher.java:301)
            at com.myApp.EntryServlet.service(EntryServlet.java:278)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:803)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:269)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:188)
            at com.myApp.filters.RequestFilter.doFilter(RequestFilter.java:16)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:215)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:188)
            at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:210)
            at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:172)
            at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
            at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:117)
            at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:108)
            at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:151)
            at org.apache.coyote.ajp.AjpAprProcessor.process(AjpAprProcessor.java:444)
            at org.apache.coyote.ajp.AjpAprProtocol$AjpConnectionHandler.process(AjpAprProtocol.java:472)
            at org.apache.tomcat.util.net.AprEndpoint$Worker.run(AprEndpoint.java:1286)
            at java.lang.Thread.run(Thread.java:636)
        Caused by: java.io.IOException
            at org.apache.coyote.ajp.AjpAprProcessor.flush(AjpAprProcessor.java:1200)
            at org.apache.coyote.ajp.AjpAprProcessor$SocketOutputBuffer.doWrite(AjpAprProcessor.java:1285)
            at org.apache.coyote.Response.doWrite(Response.java:560)
            at org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:361)

    And the code in the download servlet:

        @Override
        protected void doPost(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, java.io.IOException {
            try {
                response.setContentType("application/pdf");
                response.setHeader("Pragma", "");
                response.setHeader("Cache-Control", "");
                response.setHeader("Content-Disposition", "Inline; Filename=myPDFFile..pdf");
                ServletOutputStream out = response.getOutputStream();
                byte[] downloadBytes = Base64.decode((String) request.getAttribute("fileToDownloadBase64"));
                out.write(downloadBytes);
            } catch (Base64DecodingException e) {
                e.printStackTrace();
                response.getOutputStream().print("An error occurred");
            }
        }
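    For what it's worth, ClientAbortException is Tomcat's wrapper for "the client went away mid-response" (closed browser, reset connection, proxy timeout), which fits the rare, unreproducible pattern. A hedged sketch of tolerating it rather than letting it propagate:

        try {
            out.write(downloadBytes);
            out.flush();
        } catch (IOException e) {
            // ClientAbortException extends IOException: the client aborted
            // the download, so there is nothing server-side to fix.
            getServletContext().log("Download aborted by client", e);
        }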

  • Can someone point out to me what I did wrong? Trying to map VB to Java using JNA to access the library

    - by henry
    Original working VB code:

        Private Declare Function ConnectReader Lib "rfidhid.dll" () As Integer
        Private Declare Function DisconnectReader Lib "rfidhid.dll" () As Integer
        Private Declare Function SetAntenna Lib "rfidhid.dll" (ByVal mode As Integer) As Integer
        Private Declare Function Inventory Lib "rfidhid.dll" (ByRef tagdata As Byte, ByVal mode As Integer, ByRef taglen As Integer) As Integer

        Private Sub Form1_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load
            Dim desc As String
            desc = "1. Click ""Connect"" to talk to reader." & vbCr & vbCr
            desc &= "2. Click ""RF On"" to wake up the TAG." & vbCr & vbCr
            desc &= "3. Click ""Read Tag"" to get tag PCEPC."
            lblDesc.Text = desc
        End Sub

        Private Sub cmdConnect_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles cmdConnect.Click
            If cmdConnect.Text = "Connect" Then
                If ConnectReader() Then
                    cmdConnect.Text = "Disconnect"
                Else
                    MsgBox("Unable to connect to RFID Reader. Please check reader connection.")
                End If
            Else
                If DisconnectReader() Then
                    cmdConnect.Text = "Connect"
                End If
            End If
        End Sub

        Private Sub cmdRF_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles cmdRF.Click
            If cmdRF.Text = "RF On" Then
                If SetAntenna(&HFF) Then
                    cmdRF.Text = "RF Off"
                End If
            Else
                If SetAntenna(&H0) Then
                    cmdRF.Text = "RF On"
                End If
            End If
        End Sub

        Private Sub cmdReadTag_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles cmdReadTag.Click
            Dim tagdata(64) As Byte
            Dim taglen As Integer, cnt As Integer
            Dim pcepc As String
            pcepc = ""
            If Inventory(tagdata(0), 1, taglen) Then
                For cnt = 0 To taglen - 1
                    pcepc &= tagdata(cnt).ToString("X2")
                Next
                txtPCEPC.Text = pcepc
            Else
                txtPCEPC.Text = "ReadError"
            End If
        End Sub

    Java code (simplified):

        import com.sun.jna.Library;
        import com.sun.jna.Native;

        public class HelloWorld {
            public interface MyLibrary extends Library {
                public int ConnectReader();
                public int SetAntenna(int mode);
                public int Inventory(byte tagdata, int mode, int taglen);
            }

            public static void main(String[] args) {
                MyLibrary lib = (MyLibrary) Native.loadLibrary("rfidhid", MyLibrary.class);
                System.out.println(lib.ConnectReader());
                System.out.println(lib.SetAntenna(255));

                byte[] tagdata = new byte[64];
                int taglen = 0;
                int cnt;
                String pcepc = "";
                if (lib.Inventory(tagdata[0], 1, taglen) == 1) {
                    for (cnt = 0; cnt < taglen; cnt++)
                        pcepc += String.valueOf(tagdata[cnt]);
                }
            }
        }

    The error happens when lib.Inventory is run. lib.Inventory is used to get the tag from the RFID reader. If there is no tag, there is no error. The error:

        An unexpected error has been detected by Java Runtime Environment:
        EXCEPTION_ACCESS_VIOLATION (0xc0000005) at pc=0x0b1d41ab, pid=5744, tid=4584
        Java VM: Java HotSpot(TM) Client VM (11.2-b01 mixed mode windows-x86)
        Problematic frame:
        C  [rfidhid.dll+0x141ab]
        An error report file with more information is saved as:
        C:\eclipse\workspace\FelmiReader\hs_err_pid5744.log
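    A hedged sketch of the likely culprit (an educated guess, not confirmed by the thread): in VB, ByRef tagdata As Byte called with tagdata(0) passes a pointer to the start of the array, and ByRef taglen passes a pointer to the integer - but the Java mapping above passes a single byte and an int by value, so the DLL writes through garbage "pointers", which would explain the access violation. JNA's equivalents are a byte[] for the buffer and an IntByReference for the out-parameter:

        import com.sun.jna.ptr.IntByReference;

        public interface MyLibrary extends Library {
            int ConnectReader();
            int SetAntenna(int mode);
            // ByRef Byte    -> byte[] (JNA pins the array and passes its address)
            // ByRef Integer -> IntByReference (out-parameter)
            int Inventory(byte[] tagdata, int mode, IntByReference taglen);
        }

        // usage:
        byte[] tagdata = new byte[64];
        IntByReference taglen = new IntByReference();
        if (lib.Inventory(tagdata, 1, taglen) == 1) {
            for (int cnt = 0; cnt < taglen.getValue(); cnt++)
                pcepc += String.format("%02X", tagdata[cnt]);   // hex, like the VB "X2"
        }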

  • Logging raw HTTP request/response in ASP.NET MVC & IIS7

    - by Greg Beech
    I'm writing a web service (using ASP.NET MVC) and for support purposes we'd like to be able to log the requests and responses, in as close as possible to the raw on-the-wire format (i.e. including HTTP method, path, all headers, and the body), into a database.

    What I'm not sure of is how to get hold of this data in the least 'mangled' way. I can re-constitute what I believe the request looks like by inspecting all the properties of the HttpRequest object and building a string from them (and similarly for the response), but I'd really like to get hold of the actual request/response data that's sent on the wire. I'm happy to use any interception mechanism such as filters, modules, etc., and the solution can be specific to IIS7. However, I'd prefer to keep it in managed code only. Any recommendations?

    Edit: I note that HttpRequest has a SaveAs method which can save the request to disk, but this reconstructs the request from the internal state using a load of internal helper methods that cannot be accessed publicly (quite why this doesn't allow saving to a user-provided stream I don't know). So it's starting to look like I'll have to do my best to reconstruct the request/response text from the objects... groan.

    Edit 2: Please note that I said the whole request including method, path, headers etc. The current responses only look at the body streams, which does not include this information.

    Edit 3: Does nobody read questions around here? Five answers so far and yet not one even hints at a way to get the whole raw on-the-wire request. Yes, I know I can capture the output streams and the headers and the URL and all that stuff from the request object. I already said that in the question, see:

        I can re-constitute what I believe the request looks like by inspecting all the properties of the HttpRequest object and building a string from them (and similarly for the response) but I'd really like to get hold of the actual request/response data that's sent on the wire.

    If you know the complete raw data (including headers, URL, HTTP method, etc.) simply cannot be retrieved, then that would be useful to know. Similarly, if you know how to get it all in the raw format (yes, I still mean including headers, URL, HTTP method, etc.) without having to reconstruct it, which is what I asked, then that would be very useful. But telling me that I can reconstruct it from the HttpRequest/HttpResponse objects is not useful. I know that. I already said it.

    Please note: before anybody starts saying this is a bad idea, or will limit scalability, etc., we'll also be implementing throttling, sequential delivery, and anti-replay mechanisms in a distributed environment, so database logging is required anyway. I'm not looking for a discussion of whether this is a good idea; I'm looking for how it can be done.
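    Since managed code in IIS never sees the literal wire bytes, here is a hedged sketch of the reconstruction fallback the question already concedes (a sketch only; call it from an HttpModule or action filter):

        // A minimal sketch: rebuild the request head from HttpRequest state.
        static string RequestHead(System.Web.HttpRequest req)
        {
            var sb = new System.Text.StringBuilder();
            sb.AppendFormat("{0} {1} HTTP/1.1\r\n", req.HttpMethod, req.RawUrl);
            foreach (string name in req.Headers)           // NameValueCollection keys
                sb.AppendFormat("{0}: {1}\r\n", name, req.Headers[name]);
            sb.Append("\r\n");                             // blank line before the body
            return sb.ToString();
        }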

  • How can I import one Gradle script into another?

    - by Ant
    Hi all, I have a complex Gradle script that wraps up a load of functionality around building and deploying a number of NetBeans projects to a number of environments. The script works very well, but in essence it is all configured through half a dozen maps holding project and environment information.

    I want to abstract the tasks away into another file, so that I can simply define my maps in a simple build file and import the tasks from the other file. That way I can use the same core tasks for a number of projects and configure those projects with a simple set of maps. Can anyone tell me how I can import one Gradle file into another, in a similar manner to Ant's <import> task? I've trawled Gradle's docs to no avail so far.

    Additional info: after Tom's response below, I thought I'd try and clarify exactly what I mean. Basically I have a Gradle script which runs a number of subprojects. However, the subprojects are all NetBeans projects and come with their own Ant build scripts, so I have tasks in Gradle to call each of these. My problem is that I have some configuration at the top of the file, such as:

        projects = [
            [name: "MySubproject1", shortname: "sub1", env: "mainEnv", cvs_module: "mod1"],
            [name: "MySubproject2", shortname: "sub2", env: "altEnv",  cvs_module: "mod2"]
        ]

    I then generate tasks such as:

        projects.each({
            task "checkout_$it.shortname" << {
                // Code to, for example, check the module out from CVS using config from 'it'.
            }
        })

    I have many of these task-generation snippets, and all of them are generic - they depend entirely on the config in the projects list. So what I want is a way to put this in a separate script and import it in the following sort of way:

        projects = [
            [name: "MySubproject1", shortname: "sub1", env: "mainEnv", cvs_module: "mod1"],
            [name: "MySubproject2", shortname: "sub2", env: "altEnv",  cvs_module: "mod2"]
        ]

        import("tasks.gradle") // This will import and run the script so that all tasks are generated for the projects given above.

    So in this example, tasks.gradle will have all the generic task-generation code in it, and will get run for the projects defined in the main build.gradle file. In this way, tasks.gradle is a file that can be used by all large projects that consist of a number of sub-projects with NetBeans Ant build files.
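    A hedged sketch of what that import line might look like in real Gradle syntax (apply from: is Gradle's script-plugin mechanism; the applied file sees the calling project's properties):

        // build.gradle
        ext.projects = [
            [name: "MySubproject1", shortname: "sub1", env: "mainEnv", cvs_module: "mod1"],
            [name: "MySubproject2", shortname: "sub2", env: "altEnv",  cvs_module: "mod2"]
        ]
        apply from: 'tasks.gradle'   // runs the shared task-generation script

        // tasks.gradle
        // (rename the property if "projects" ever collides with Gradle's own API)
        projects.each { p ->
            task "checkout_${p.shortname}" << {
                println "checking out ${p.cvs_module} for ${p.env}"
            }
        }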

  • Installing Windows on HP Proliant Servers without SmartStart

    - by Fitzroy
    I have a PXE server for deploying Windows XP and Windows 7 to workstations. The process is as follows:

        1. Boot the workstation from the NIC.
        2. The workstation sends a DHCP request.
        3. The DHCP server responds with an IP address and the location of the PXE server.
        4. The workstation downloads a WinPE image file from the PXE server via TFTP.
        5. The workstation stores the WinPE image file in memory and executes it.

    Once booted into WinPE, I connect to a network share to gain access to either the Windows XP or Windows 7 installation files. A custom script is launched to guide you through formatting and partitioning the hard drive(s) (using DISKPART and FORMAT). Another custom script asks for details such as the hostname to assign to the workstation. The answers provided are used to build an unattended answer file (SIF [Setup Information File] for WinXP, XML for Win7). The Windows setup EXE is launched, passing the unattended answer file to it as a parameter. The Windows XP and Windows 7 installation sources have been customised to include the drivers for our Dell workstations. They also run a number of scripts upon first boot to install software packages.

    This process works very well for our workstations, and I would now like to use it for building our servers too. The vast majority of our servers are HP Proliant DL360 G6, DL380 G5 and DL380 G6, running Windows Server 2003 (various editions) or 2008 (various editions). To date, we have always built the HP Proliant servers using the SmartStart CD provided. SmartStart does three useful things for us:

        1. Sets up RAID with the HP Array Configuration Utility (ACU).
        2. Installs and configures SNMP.
        3. Installs various HP tools for Windows (HP Array Configuration Utility, HP Array Diagnostic Utility, HP Proliant Integrated Management Log Viewer, etc).

    Using SmartStart I have never had to manually download and install Windows drivers for network, sound, video, etc. I'm not sure if this is because SmartStart copies drivers from the CD during setup, or whether Windows just has the drivers natively in its driver CAB. If I abandon the SmartStart CD in favour of my PXE server, I would have to do the following:

        1. As I won't have access to ACU, configure the RAID (before booting to the PXE server) by pressing F8 during the boot process to access Option ROM Configuration for Arrays (ORCA).
        2. Install SNMP and the HP tools after the Windows installation is complete, using the Proliant Support Pack.

    Is this method OK? Is there anything that the SmartStart CD does that I'll be unable to do by other means? Are there any disadvantages to not using the SmartStart CD? Many thanks.

    UPDATE 05/01/12

    I've been reading through the SmartStart Scripting Toolkit documentation. The toolkit contains command-line tools which work within WinPE and can do such things as configure BIOS settings, configure an array, and set up iLO. I'm personally not too bothered about configuring BIOS settings, as I rarely deviate from the defaults (unless the server is to be a Hyper-V host). I'm not too fussed about being able to configure the array from within WinPE either, as I'm happy to just press F8 and use ORCA - although if it's easy enough to do, I will explore it further, since it saves time if everything can be configured from within WinPE.

    One of the nice features all the tools possess is that you can pass input files to them. E.g. configure one server to your requirements, capture its configuration to a file (using the appropriate tool), then run the tool on other servers, passing the input file with the captured configuration.

    Array controller drivers appear to be included with the toolkit, along with an example of how to incorporate them within a WinPE build. I suppose WinPE won't be able to see logical volumes (i.e. 2x physical disks in a RAID 1 configuration) without the array controller drivers?

    I mentioned that SmartStart normally installs a bunch of HP tools for Windows. I've had a look today, and if you run the SmartStart CD from within Windows, all the tools can be installed - so I can do this after the Windows installation is complete.

    The SmartStart CD also appears to contain a lot of Windows drivers. I can customise my Windows 2008 source to incorporate these. However, I understand that incorporating an array controller driver is a little different from most drivers: I believe you have to provide the driver during the very early stages of Windows setup. I'm working through the Scripting Toolkit documentation to try and work this out...

  • JBoss logging issue

    - by balaji
    I'm working as a deployer and server administrator. We use JBoss 4.0.x AS to deploy our applications. The issue I'm facing: whenever we redeploy/restart the server, server.log gets created, but after some time the logging stops - the server.log file is not updated at all any more. Because of this, we cannot trace the other critical issues we have.

    We have two separate nodes, and we deploy/restart the server separately on the two nodes. We are facing the issue in both our test and production environments, and I cannot trace where exactly the problem is. Could you please help me resolve it? When we have any other issues, we check the log files - if the log itself is not getting updated, how can we go on analysing issues without recent logs?

    Below are the entries found in stdout.log:

        18:55:50,303 INFO  [Server] Core system initialized
        18:55:52,296 INFO  [WebService] Using RMI server codebase: http://kl121tez.is.klmcorp.net:8083/
        18:55:52,313 INFO  [Log4jService$URLWatchTimerTask] Configuring from URL: resource:log4j.xml
        18:55:52,860 ERROR [STDERR] LOG0026E The Log Manager cannot create the object AmasRBPFTraceLogger without a class name.
        18:55:52,860 ERROR [STDERR] LOG0026E The Log Manager cannot create the object AmasRBPFMessageLogger without a class name.
        18:55:54,273 ERROR [STDERR] LOG0026E The Log Manager cannot create the object AmasCacheTraceLogger without a class name.
        18:55:54,274 ERROR [STDERR] LOG0026E The Log Manager cannot create the object AmasCacheMessageLogger without a class name.
        18:55:54,334 ERROR [STDERR] LOG0026E The Log Manager cannot create the object JACCTraceLogger without a class name.
        18:55:54,334 ERROR [STDERR] LOG0026E The Log Manager cannot create the object JACCMessageLogger without a class name.
        18:55:56,059 INFO  [ServiceEndpointManager] WebServices: jbossws-1.0.3.SP1 (date=200609291417)
        18:55:56,635 INFO  [Embedded] Catalina naming disabled
        18:55:56,671 INFO  [ClusterRuleSetFactory] Unable to find a cluster rule set in the classpath. Will load the default rule set.
        18:55:56,672 INFO  [ClusterRuleSetFactory] Unable to find a cluster rule set in the classpath. Will load the default rule set.
        18:55:56,843 INFO  [Http11BaseProtocol] Initializing Coyote HTTP/1.1 on http-0.0.0.0-8180
        18:55:56,844 INFO  [Catalina] Initialization processed in 172 ms
        18:55:56,844 INFO  [StandardService] Starting service jboss.web
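    A hedged place to start (common causes, not a confirmed diagnosis): logging that works at boot and then goes quiet often means something reconfigured log4j afterwards - typically a deployed application bundling its own log4j.jar/log4j.xml, or an external logrotate moving the file while the appender still holds the old handle. It may be worth confirming the stock FILE appender in conf/log4j.xml (jboss-log4j.xml on later 4.2.x) is intact and that no WAR/EAR ships its own log4j:

        <!-- the stock JBoss 4.x server.log appender, for comparison -->
        <appender name="FILE" class="org.jboss.logging.appender.DailyRollingFileAppender">
            <errorHandler class="org.jboss.logging.util.OnlyOnceErrorHandler"/>
            <param name="File" value="${jboss.server.home.dir}/log/server.log"/>
            <param name="Append" value="true"/>
            <param name="DatePattern" value="'.'yyyy-MM-dd"/>
            <layout class="org.apache.log4j.PatternLayout">
                <param name="ConversionPattern" value="%d %-5p [%c] %m%n"/>
            </layout>
        </appender>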

  • ASP.NET Dynamically filtering data

    - by Jasper
    For a project I'm working on, we're looking for a way to dynamically add filters to a page which then control the data output in, for instance, a grid. We want to add the filters dynamically because we want the customer to be able to change which properties can be filtered and what filter type (textbox, dropdown, colour picker, etc.) should be used.

    The filter should work as follows:

        - The customer links a filter to a certain property and specifies the filter type (for this example: dropdown).
        - A user control which contains all the filters loads each filter specified.
        - The filters load all values of the specified property as options. The first time the page loads, this would be the values of all items.
        - Now the user selects a value from one of the filters; the page reloads.
        - Only items which have the specified filter value are retrieved; the user may specify one or more filters at the same time.
        - Once a user drills down by filtering, only the filter values of the retrieved items should be offered in the other filters.

    I have the following problems:

        - When I create the filters at runtime, events are lost because the controls get recreated on each postback.
        - I could place the filters in PreInit, which should solve this, but then determining which controls should be loaded becomes a problem, since loading of the environment variables isn't finished yet.
        - I don't know a good way of returning all the filter values to a central point from which I can build a good query.
        - The query has to be dynamic. I'm using LINQ, which I want to make dynamic so I don't have to select everything every time. How do I make a dynamic select query based on a string stored in the database?
        - I have to select items based on the filter values and then adjust the rest of the filters to the selection already made. That kind of messes up the whole regular databinding sequence.

    Any help on any of the above would be great!

    PS: One thing I thought about was passing along filter values in the postback in a recognizable form. That way the server could use them for selection and then create the filters and auto-select the previously selected filter values. I'm not quite sure how to achieve this though...
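    A hedged sketch of the usual way out of the lost-events problem (FilterDefinition and LoadFilterDefinitions are hypothetical stand-ins for wherever the customer's filter config lives): recreate the same control tree, with the same IDs, in OnInit on every request, so ViewState and postback events rebind; the definitions can come from the database before the page needs any environment state:

        protected override void OnInit(EventArgs e)
        {
            base.OnInit(e);
            // Rebuild identical controls (same IDs) on every request so
            // ViewState reattaches and postback events fire.
            foreach (FilterDefinition def in LoadFilterDefinitions())   // hypothetical
            {
                WebControl filter = def.Kind == FilterKind.DropDown
                    ? (WebControl)new DropDownList { ID = "flt_" + def.Id, AutoPostBack = true }
                    : new TextBox { ID = "flt_" + def.Id };
                filterPanel.Controls.Add(filter);   // a placeholder on the user control
            }
        }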

  • Any simple approaches for managing customer data change requests for global reference files?

    - by Kelly Duke
    For the first time, I am developing in an environment in which there is a central repository for a number of different industry-standard reference data tables, and many different customers who need to select records from these tables to fill in foreign-key information for their customer-specific records. Because these industry-standard reference files are utilized by all customers, I want to reserve Create/Update/Delete access on these records for global product administrators. However, I would like to implement a (semi-)automated interface by which specific customers could request record additions, deletions or modifications to any of the industry-standard reference files that are shared among all customers.

    I know I need something like a "data change request" table specifying: user id, user request datetime, request type (insert, modify, delete), a user-entered text explanation of the change request, the request's current status (pending, declined, completed), admin resolution datetime, admin id, an admin-entered text description of the resolution, etc. What I can't figure out is how to elegantly handle the fact that these data change requests could apply to dozens of different tables with differing column definitions.

    I would like to give the customer users making these requests a convenient way to enter their proposed additions/modifications directly into CRUD screens that look very much like the reference-table CRUD screens they don't have write/delete permissions for (with an additional text explanation and perhaps a request-priority field). I would also like to give the global admins a tool that lets them view all outstanding data change requests for the users they oversee, sorted by date requested or user/date requested. Upon selecting a data change request record off the list, the admin would be directed to another CRUD screen populated with the fields the customer requested for the new/modified reference-table record, along with the customer's text explanation, the request status and the text resolution explanation field. At this point the admin could accept/edit/reject the requested change; if accepted, the affected reference file would be automatically updated with the appropriate fields, and the change request record's status, text resolution explanation and resolution datetime would all also be appropriately updated.

    However, I want to keep the actual production reference tables as simple as possible, free of these extraneous and typically null customer change-request fields. I'd also like the data change request file to aggregate all data change requests across all the reference tables, yet somehow "point to" the specific reference table and primary key in question for modification and deletion requests, or the specific reference table and associated customer-entered field values for record-creation requests.

    Does anybody have any ideas of how to design something like this effectively? Is there a cleaner, simpler way I am missing? Thank you so much for reading.
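    One hedged way to get the "points at any table" property without polluting the reference tables (a sketch with made-up names, T-SQL assumed): keep the request generic, record the target table and key as data, and carry the heterogeneous proposed column values as an XML (or JSON) document that the admin screen materializes:

        CREATE TABLE DataChangeRequest (
            RequestId       INT IDENTITY PRIMARY KEY,
            RequestedBy     INT           NOT NULL,   -- user id
            RequestedAt     DATETIME      NOT NULL,
            RequestType     CHAR(1)       NOT NULL,   -- 'I'nsert / 'U'pdate / 'D'elete
            TargetTable     SYSNAME       NOT NULL,   -- which reference table
            TargetKey       NVARCHAR(100) NULL,       -- PK of the row, for U/D
            ProposedValues  XML           NULL,       -- <col name="..">value</col> pairs, for I/U
            UserExplanation NVARCHAR(MAX) NOT NULL,
            Status          CHAR(1)       NOT NULL DEFAULT 'P',  -- Pending/Declined/Completed
            ResolvedBy      INT           NULL,
            ResolvedAt      DATETIME      NULL,
            ResolutionNote  NVARCHAR(MAX) NULL
        );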

  • Motherboard/PSU crippling USB and SATA

    - by celebdor
    I very recently bought a new desktop computer. The motherboard is a Gigabyte Z77MX-D3H and the power supply is an OCZ ZS Series 550W. The issue I have is that once I boot to the operating system (I have tried Fedora and Ubuntu with kernels 2.6.38 - 3.4.0), my hard drive (2.5" magnetic) occasionally makes a power-switch noise and resets. Needless to say, when this drive is the OS drive, the OS crashes.

    I also have an SSD that works fine with the same OS configurations, but if I have the magnetic hard drive attached as a second drive, it works erratically, and the reconnects result in corrupted data. I also noticed that whenever I plug an external USB 2.0 or USB 3.0 hard drive into the computer, the reconnect issue is even worse:

        [   52.198441] sd 7:0:0:0: [sdc] Spinning up disk...
        [   57.955811] usb 4-3: USB disconnect, device number 3
        [   58.023687] .ready
        [   58.023914] sd 7:0:0:0: [sdc] READ CAPACITY(16) failed
        [   58.023919] sd 7:0:0:0: [sdc] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
        [   58.023932] sd 7:0:0:0: [sdc] Sense not available.
        [   58.024061] sd 7:0:0:0: [sdc] READ CAPACITY failed
        [   58.024063] sd 7:0:0:0: [sdc] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
        [   58.024064] sd 7:0:0:0: [sdc] Sense not available.
        [   58.024099] sd 7:0:0:0: [sdc] Write Protect is off
        [   58.024101] sd 7:0:0:0: [sdc] Mode Sense: 00 00 00 00
        [   58.024135] sd 7:0:0:0: [sdc] Asking for cache data failed
        [   58.024137] sd 7:0:0:0: [sdc] Assuming drive cache: write through
        [   58.024400] sd 7:0:0:0: [sdc] READ CAPACITY(16) failed
        [   58.024402] sd 7:0:0:0: [sdc] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
        [   58.024405] sd 7:0:0:0: [sdc] Sense not available.
        [   58.024448] sd 7:0:0:0: [sdc] READ CAPACITY failed
        [   58.024450] sd 7:0:0:0: [sdc] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
        [   58.024451] sd 7:0:0:0: [sdc] Sense not available.
        [   58.024469] sd 7:0:0:0: [sdc] Asking for cache data failed
        [   58.024471] sd 7:0:0:0: [sdc] Assuming drive cache: write through
        [   58.024472] sd 7:0:0:0: [sdc] Attached SCSI disk
        [   58.407725] usb 4-3: new SuperSpeed USB device number 4 using xhci_hcd
        [   58.424921] scsi8 : usb-storage 4-3:1.0
        [   59.424185] scsi 8:0:0:0: Direct-Access     WD       My Passport 0740 1003 PQ: 0 ANSI: 6
        [   59.424406] scsi 8:0:0:1: Enclosure         WD       SES Device       1003 PQ: 0 ANSI: 6
        [   59.425098] sd 8:0:0:0: Attached scsi generic sg2 type 0
        [   59.425176] ses 8:0:0:1: Attached Enclosure device
        [   59.425248] ses 8:0:0:1: Attached scsi generic sg3 type 13
        [   61.845836] sd 8:0:0:0: [sdc] 976707584 512-byte logical blocks: (500 GB/465 GiB)
        [   61.845838] sd 8:0:0:0: [sdc] 4096-byte physical blocks
        [   61.846336] sd 8:0:0:0: [sdc] Write Protect is off
        [   61.846338] sd 8:0:0:0: [sdc] Mode Sense: 47 00 10 08
        [   61.846718] sd 8:0:0:0: [sdc] No Caching mode page present
        [   61.846720] sd 8:0:0:0: [sdc] Assuming drive cache: write through
        [   61.848105] sd 8:0:0:0: [sdc] No Caching mode page present
        [   61.848106] sd 8:0:0:0: [sdc] Assuming drive cache: write through
        [   61.857147]  sdc: sdc1
        [   61.858915] sd 8:0:0:0: [sdc] No Caching mode page present
        [   61.858916] sd 8:0:0:0: [sdc] Assuming drive cache: write through
        [   61.858918] sd 8:0:0:0: [sdc] Attached SCSI disk
        [   69.875809] usb 4-3: USB disconnect, device number 4
        [   70.275816] usb 4-3: new SuperSpeed USB device number 5 using xhci_hcd
        [   70.293063] scsi9 : usb-storage 4-3:1.0
        [   71.292257] scsi 9:0:0:0: Direct-Access     WD       My Passport 0740 1003 PQ: 0 ANSI: 6
        [   71.292505] scsi 9:0:0:1: Enclosure         WD       SES Device       1003 PQ: 0 ANSI: 6
        [   71.293527] sd 9:0:0:0: Attached scsi generic sg2 type 0
        [   71.293668] ses 9:0:0:1: Attached Enclosure device
        [   71.293758] ses 9:0:0:1: Attached scsi generic sg3 type 13
        [   73.323804] usb 4-3: USB disconnect, device number 5
        [  101.868078] ses 9:0:0:1: Device offlined - not ready after error recovery
        [  101.868124] ses 9:0:0:1: Failed to get diagnostic page 0x50000
        [  101.868131] ses 9:0:0:1: Failed to bind enclosure -19
        [  101.868288] sd 9:0:0:0: [sdc] READ CAPACITY(16) failed
        [  101.868292] sd 9:0:0:0: [sdc] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
        [  101.868296] sd 9:0:0:0: [sdc] Sense not available.
        [  101.868428] sd 9:0:0:0: [sdc] READ CAPACITY failed
        [  101.868434] sd 9:0:0:0: [sdc] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
        [  101.868439] sd 9:0:0:0: [sdc] Sense not available.
        [  101.868468] sd 9:0:0:0: [sdc] Write Protect is off
        [  101.868473] sd 9:0:0:0: [sdc] Mode Sense: 00 00 00 00
        [  101.868580] sd 9:0:0:0: [sdc] Asking for cache data failed
        [  101.868584] sd 9:0:0:0: [sdc] Assuming drive cache: write through
        [  101.868845] sd 9:0:0:0: [sdc] READ CAPACITY(16) failed
        [  101.868849] sd 9:0:0:0: [sdc] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
        [  101.868854] sd 9:0:0:0: [sdc] Sense not available.
        [  101.868894] sd 9:0:0:0: [sdc] READ CAPACITY failed
        [  101.868898] sd 9:0:0:0: [sdc] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
        [  101.868903] sd 9:0:0:0: [sdc] Sense not available.
        [  101.868961] sd 9:0:0:0: [sdc] Asking for cache data failed
        [  101.868966] sd 9:0:0:0: [sdc] Assuming drive cache: write through
        [  101.868969] sd 9:0:0:0: [sdc] Attached SCSI disk

    Now, if I plug the same drive into the powered USB 2.0 hub of my monitor, the issue is not reproduced (at least over a 20-hour operation). Also, the USB reconnect issue is less frequent if the hard drive is plugged in before I switch on the computer.

    Does anybody have some advice as to what I could do? Which part(s) are faulty and should be replaced? I really don't know whether to point my finger at the PSU or the motherboard (I have updated to the latest firmware and checked the BIOS settings several times).

    EDIT: The reconnects are happening both on the SATA-connected drives and on the USB-connected drives.

  • With dnsmasq as the DNS server, 'dig' and 'ping' succeed while 'nslookup' fails

    - by einpoklum
    I installed dnsmasq on a machine of mine (it's a Kubuntu 12.04 LTS), backed only by /etc/hosts (no connection to the Internet until later). Now, if I dig mymachine, I get 192.168.0.1, but if I try nslookup mymachine, I get:

        >> connection timed out; no servers could be reached

    I also tried nslookup mymachine.mynicedomain.org - that didn't work either. Pinging (edit:) succeeds. This happens both on the server machine itself and on other machines on the network. How can I get the DNS lookups to work? What problem is preventing nslookup from succeeding?

    Additional information. In the server's /etc/hosts:

        192.168.0.1 mymachine

    In the server's nsswitch.conf (admittedly, this is a bit weird; I also tried hosts: files dns instead, with the same effect):

        hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4

    In resolv.conf (which is generated by dnsmasq):

        nameserver 127.0.0.1
        search mynicedomain.org

    In the server's /etc/hosts.allow:

        domain: ALL

    In the other machines' /etc/resolv.conf (this is set by the DHCP client):

        nameserver 192.168.0.1
        search mynicedomain.org

    Relevant netstat output on the server:

        Proto Recv-Q Send-Q Local Address        Foreign Address State
        tcp        0      0 127.0.0.1:53         0.0.0.0:*       LISTEN
        tcp        0      0 192.168.0.1:53       0.0.0.0:*       LISTEN

    Finally, here's the ipconfig output from one of the client machines on the network (running Windows 7):

        Connection-specific DNS Suffix . : mynicedomain.org
        Description . . . . . . . . . . . : Intel(R) 82579LM Gigabit Network Connection
        Physical Address. . . . . . . . . : 12-34-56-78-9A-BC
        DHCP Enabled. . . . . . . . . . . : Yes
        Autoconfiguration Enabled . . . . : Yes
        IPv4 Address. . . . . . . . . . . : 192.168.0.50(Preferred)
        Subnet Mask . . . . . . . . . . . : 255.255.255.0
        Lease Obtained. . . . . . . . . . : Sunday, October 20th 2013 16:20:25
        Lease Expires . . . . . . . . . . : Sunday, October 20th 2013 18:20:24
        Default Gateway . . . . . . . . . : 192.168.0.1
        DHCP Server . . . . . . . . . . . : 192.168.0.1
        DNS Servers . . . . . . . . . . . : 192.168.0.1
        NetBIOS over Tcpip. . . . . . . . : Enabled

    Notes: may be related to this question.
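    A hedged observation plus sketch (an assumption to verify, not a confirmed diagnosis): the netstat shown lists only TCP sockets, while nslookup's queries go over UDP port 53, so it's worth checking that dnsmasq is actually reachable on UDP and that it serves the bare hostnames from /etc/hosts with the domain appended:

        # /etc/dnsmasq.conf - a minimal sketch
        listen-address=127.0.0.1,192.168.0.1
        domain=mynicedomain.org
        expand-hosts          # answer mymachine.mynicedomain.org from /etc/hosts

        # verify UDP 53 is open and answering:
        #   netstat -lnu | grep :53
        #   dig @192.168.0.1 mymachine    (dig defaults to UDP, like nslookup)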

  • Printf in assembler doesn't print

    - by Gaim
    Hi there, I have a homework assignment to hack a program using a buffer overflow (with disassembling; the program was written in C++ and I don't have the source code). I have already managed it, but I have one problem. I have to print some message on the screen, so I found the address of the printf function, pushed the address of "HACKED" and the address of "%s" onto the stack (in this order) and called the function. The call runs through fine, but nothing gets printed. I tried to simulate the environment exactly as at other call sites in the program, but there has to be something wrong. Do you have any idea what I am doing wrong, so that I get no output? Thanks a lot.

    EDIT: This program is running on Windows XP SP3 32-bit, written in C++, Intel asm. Here is the "hack" code:

        CPU Disasm
        Address    Hex dump          Command                          Comments
        0012F9A3   90                NOP                              ;hack begins
        0012F9A4   90                NOP
        0012F9A5   90                NOP
        0012F9A6   89E5              MOV EBP,ESP
        0012F9A8   83EC 7F           SUB ESP,7F                       ;creating a place for working data
        0012F9AB   83EC 7F           SUB ESP,7F
        0012F9AE   31C0              XOR EAX,EAX
        0012F9B0   50                PUSH EAX
        0012F9B1   50                PUSH EAX
        0012F9B2   50                PUSH EAX
        0012F9B3   89E8              MOV EAX,EBP
        0012F9B5   83E8 09           SUB EAX,9
        0012F9B8   BA 1406EDFF       MOV EDX,FFED0614                 ;address to jump; negative because there mustn't be 00 bytes
        0012F9BD   F7DA              NOT EDX
        0012F9BF   FFE2              JMP EDX                          ;I have to jump because some values here get overwritten by the program
        0012F9C1   90                NOP
        0012F9C2   0090 00000000     ADD BYTE PTR DS:[EAX],DL
        0012F9C8   90                NOP
        0012F9C9   90                NOP
        0012F9CA   90                NOP
        0012F9CB   90                NOP
        0012F9CC   6C                INS BYTE PTR ES:[EDI],DX         ; I/O command
        0012F9CD   65:6E             OUTS DX,BYTE PTR GS:[ESI]        ; I/O command
        0012F9CF   67:74 68          JE SHORT 0012FA3A                ; Superfluous address size prefix
        0012F9D2   2069 73           AND BYTE PTR DS:[ECX+73],CH
        0012F9D5   203439            AND BYTE PTR DS:[EDI+ECX],DH
        0012F9D8   34 2C             XOR AL,2C
        0012F9DA   2066 69           AND BYTE PTR DS:[ESI+69],AH
        0012F9DD   72 73             JB SHORT 0012FA52
        0012F9DF   74 20             JE SHORT 0012FA01
        0012F9E1   3120              XOR DWORD PTR DS:[EAX],ESP
        0012F9E3   6C                INS BYTE PTR ES:[EDI],DX         ; I/O command
        0012F9E4   696E 65 73009090  IMUL EBP,DWORD PTR DS:[ESI+65],-6F6FFF8D
        0012F9EB   90                NOP
        0012F9EC   90                NOP
        0012F9ED   90                NOP
        0012F9EE   31DB              XOR EBX,EBX                      ; hack continues
        0012F9F0   8818              MOV BYTE PTR DS:[EAX],BL         ; writing 00 behind the word "HACKED"
        0012F9F2   83E8 06           SUB EAX,6
        0012F9F5   50                PUSH EAX                         ; address of "HACKED"
        0012F9F6   B8 3B8CBEFF       MOV EAX,FFBE8C3B
        0012F9FB   F7D0              NOT EAX
        0012F9FD   50                PUSH EAX                         ; address of "%s"
        0012F9FE   B8 FFE4BFFF       MOV EAX,FFBFE4FF
        0012FA03   F7D0              NOT EAX
        0012FA05   FFD0              CALL EAX                         ;address of printf

    This code is really ugly because I am new to assembler, and there mustn't be any null bytes because of the buffer-overflow bug.
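    For reference, a hedged sketch of the calling sequence printf expects (generic cdecl, not a diagnosis of the exact bug here): arguments pushed right-to-left with the caller removing them afterwards; a missing stack clean-up, a wrong printf address, or an unflushed stdout buffer (nothing appears until the CRT flushes) are all worth ruling out:

        ; generic cdecl call to printf (32-bit) - a sketch, not the fix
        push ebx            ; 2nd arg: address of "HACKED"
        push ecx            ; 1st arg: address of "%s" (pushed last)
        call eax            ; eax holds printf's address; returns chars written
        add  esp, 8         ; cdecl: the CALLER removes its two arguments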

  • How to delete multiple files with msbuild/web deployment project?

    - by Alex
    I have an odd issue with how MSBuild is behaving with a VS2008 Web Deployment Project and would like to know why it seems to randomly misbehave. I need to remove a number of files from a deployment folder that should only exist in my development environment. The files have been generated by the web application during dev/testing and are not included in my Visual Studio project/solution. The configuration I am using is as follows:

        <!-- Partial extract from Microsoft Visual Studio 2008 Web Deployment Project -->
        <ItemGroup>
          <DeleteAfterBuild Include="$(OutputPath)data\errors\*.xml" /> <!-- Folder 1: 36 files -->
          <DeleteAfterBuild Include="$(OutputPath)data\logos\*.*" />    <!-- Folder 2: 2 files -->
          <DeleteAfterBuild Include="$(OutputPath)banners\*.*" />       <!-- Folder 3: 1 file -->
        </ItemGroup>
        <Target Name="AfterBuild">
          <Message Text="------ AfterBuild process starting ------" Importance="high" />
          <Delete Files="@(DeleteAfterBuild)">
            <Output TaskParameter="DeletedFiles" PropertyName="deleted" />
          </Delete>
          <Message Text="DELETED FILES: $(deleted)" Importance="high" />
          <Message Text="------ AfterBuild process complete ------" Importance="high" />
        </Target>

    The problem is that when I do a build/rebuild of the Web Deployment Project, it sometimes removes all the files, but at other times it removes nothing! Or it removes only one or two of the three folders in the DeleteAfterBuild item group. There seems to be no consistency in when the build process decides to remove the files or not. When I edited the configuration to include only Folder 1 (for example), it removed all the files correctly. Then, adding Folders 2 and 3, it started removing all the files as I want. Then, at seemingly random times, I'd rebuild the project and it wouldn't remove any of the files!

    I have tried moving these items to the ExcludeFromBuild item group (which is probably where they should be) but it gives me the same unpredictable result. Has anyone experienced this? Am I doing something wrong? Why does this happen?
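    A hedged sketch of one plausible explanation (an assumption worth testing, not a confirmed answer): static item groups are evaluated when the project file is parsed, before the deployment output is (re)generated, so the wildcards match whatever happened to be on disk at that moment. Declaring the item group inside the target (supported since MSBuild 3.5, which VS2008 uses) defers evaluation until the target actually runs:

        <Target Name="AfterBuild">
          <!-- Evaluated when the target runs, not at parse time, so files
               produced earlier in this build are seen by the wildcards. -->
          <ItemGroup>
            <DeleteAfterBuild Include="$(OutputPath)data\errors\*.xml" />
            <DeleteAfterBuild Include="$(OutputPath)data\logos\*.*" />
            <DeleteAfterBuild Include="$(OutputPath)banners\*.*" />
          </ItemGroup>
          <Delete Files="@(DeleteAfterBuild)" />
        </Target>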

  • Help choosing authentication method

    - by Dima
    I need to choose an authentication method for an application installed and integrated in customers' environments. There are two types of environments - Windows and Linux/Unix. The application is user-based, no web stuff, pure Java. The requirement is to authenticate users of my application against a customer-provided user base. Meaning: the customer installs my app, but uses his own users to grant or deny access to it. Typical, right?

    I have three options to consider, and I need to pick the one which would be (a) the most flexible, covering the most common modern environments, and (b) the least effort while staying robust and standard.

    Option (1) - Authenticate locally, managing user credentials in some local storage, e.g. a file. The customer would add his users to my application, and it would then check the passwords. Simple, clumsy, but it would work. Customers would have to punch in every user they want to grant access to my app using some UI we would have to provide. Lots of work for me, a headache for the customer.

    Option (2) - Use LDAP authentication. Customers would tell my app where to look for users, and I would walk their directory, resolving names into user names and trying to bind with the found password. This is a better approach IMO, but more fragile, because I would have to walk an unknown directory structure, and who knows if that will be permitted everywhere. It would also be harder to test, since there are many LDAP implementations out there; the last thing I want is drowning in this voodoo.

    Option (3) - Use plain Kerberos authentication. Customers would tell my app what realm (domain) and which KDC (key distribution center) to use. In an ideal world these two parameters would be all I need to set, while customers could use their own administration tools to configure the domain and KDC. My application would simply delegate the user's credentials to this third party (using JAAS or Spring Security) and consider it a success when the third party is happy with them.

    I personally prefer (3), but I'm not sure what surprises I might face. Would this cover Windows and *nix systems entirely? Is there another option to consider?
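    For option (3), a hedged sketch of how small the moving parts can be with stock JAAS (standard Sun/Oracle JDK classes; the two customer-supplied values map onto two system properties):

        import javax.security.auth.login.LoginContext;
        import javax.security.auth.login.LoginException;
        import com.sun.security.auth.callback.TextCallbackHandler;

        public final class KerberosCheck {
            // Run with, per customer:
            //   -Djava.security.krb5.realm=EXAMPLE.COM
            //   -Djava.security.krb5.kdc=kdc.example.com
            //   -Djava.security.auth.login.config=jaas.conf
            // where jaas.conf contains:
            //   MyApp { com.sun.security.auth.module.Krb5LoginModule required; };
            public static boolean authenticate() {
                try {
                    LoginContext lc = new LoginContext("MyApp", new TextCallbackHandler());
                    lc.login();          // the KDC validates the entered credentials
                    return true;
                } catch (LoginException e) {
                    return false;        // bad credentials or unreachable KDC
                }
            }
        }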

  • Windows XP periodically disconnects, reconnects; Windows 7 doesn't

    - by einpoklum
    - by einpoklum
    My setup: I have a PC with a Gigabyte GA-MA78S2H motherboard (Realtek Gigabit wired Ethernet on-board), with the latest drivers (at least for the NIC). I'm connecting via an Edimax BR-6216Mg (again, a wired connection). For some reason I experience short, periodic disconnects and reconnects. Specifically: Skype disconnects, tries to connect, and succeeds after a short while; incoming SFTP sessions get dropped; in a browser I sometimes get stuck in the DNS lookup or the connection to a website and the page won't load, yet a couple of seconds later a reload works. All this happens with Windows XP SP3. With Windows 7 it doesn't happen — the connection is smooth (the OS is sluggish, though, but never mind that). Like I said, I updated the NIC driver. I also tried reducing the MTU (using something called Dr. TCP), thinking that might help, but it didn't. (I'm a bit, but not super-, knowledgeable about TCP parameters.) I'm guessing it's either a problem with the driver or some setting that differs between the two OSes. ipconfig output for my adapter:

        Ethernet adapter Local Area Connection:
            Connection-specific DNS Suffix . :
            Description . . . . . . . . . . . : Realtek PCIe GBE Family Controller
            Physical Address. . . . . . . . . : 00-1D-7D-E9-72-9E
            Dhcp Enabled. . . . . . . . . . . : Yes
            Autoconfiguration Enabled . . . . : Yes
            IP Address. . . . . . . . . . . . : 192.168.0.2
            Subnet Mask . . . . . . . . . . . : 255.255.255.0
            Default Gateway . . . . . . . . . : 192.168.0.254
            DHCP Server . . . . . . . . . . . : 192.168.0.254
            DNS Servers . . . . . . . . . . . : 192.117.235.235
                                                62.219.186.7
            Lease Obtained. . . . . . . . . . : Saturday, March 10, 2012 8:28:20 AM
            Lease Expires . . . . . . . . . . : Friday, January 26, 1906 2:00:04 AM

  • How do I access SourceGear Web services using SOAP::Lite?

    - by user565793
    - by user565793
    For some reason SourceGear provides an undocumented Web service on their installations. They actually ask developers to use the API instead, because the Web service is kind of messy, but that is a problem in my case because I cannot use the API in a Perl environment, so their solution for my specific case is to use the Web service. This shouldn't be a problem — using SOAP::Lite I have connected to several Web services in the past in the same way. But the lack of documentation is major chaos if you don't know where the SOAP calls can be made. I only have XML samples from which to decipher where and how to make these calls. It would be great if a real SOAP genius could help me out with this. This is an example of the login call and the expected response:

    Request

        POST /fortress/dragnetwebservice.asmx HTTP/1.1
        Host: velecloudserver
        Content-Type: text/xml; charset=utf-8
        Content-Length: length
        SOAPAction: "http://www.sourcegear.com/schemas/dragnet/LoginPlainText"

        <?xml version="1.0" encoding="utf-8"?>
        <soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
          <soap:Body>
            <LoginPlainText xmlns="http://www.sourcegear.com/schemas/dragnet">
              <strLogin>string</strLogin>
              <strPassword>string</strPassword>
            </LoginPlainText>
          </soap:Body>
        </soap:Envelope>

    Response

        HTTP/1.1 200 OK
        Content-Type: text/xml; charset=utf-8
        Content-Length: length

        <?xml version="1.0" encoding="utf-8"?>
        <soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
          <soap:Body>
            <LoginPlainTextResponse xmlns="http://www.sourcegear.com/schemas/dragnet">
              <LoginPlainTextResult>int</LoginPlainTextResult>
              <strAuthTicket>string</strAuthTicket>
            </LoginPlainTextResponse>
          </soap:Body>
        </soap:Envelope>

    I'm looking for a way to assemble this somehow. This is my Perl example:

        my $soap = SOAP::Lite
            -> uri ('http://velecloudserver/fortress/dragnetwebservice.asmx')
            -> proxy('http://velecloudserver/fortress/dragnetwebservice.asmx/LoginPlainText');

        my $som = $soap->call('LoginPlainText', SOAP::Data->name('LoginPlainText')->value(
            \SOAP::Data->value([
                SOAP::Data->name('strLogin')->value('admin'),
                SOAP::Data->name('strPassword')->value('Adm1234'),
            ]))
        );

    Any tip would be appreciated.
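    Judging from the sample request, the endpoint to POST to is dragnetwebservice.asmx, the element namespace is http://www.sourcegear.com/schemas/dragnet, and the SOAPAction header is namespace/method — none of which match the uri/proxy values in the example above. A sketch of how that is commonly wired up in SOAP::Lite for .NET-style services (untested against this particular service; the host name is taken from the post):

        use SOAP::Lite;

        my $ns   = 'http://www.sourcegear.com/schemas/dragnet';
        my $soap = SOAP::Lite
            ->proxy('http://velecloudserver/fortress/dragnetwebservice.asmx') # where to POST
            ->default_ns($ns)                                                 # element namespace
            ->on_action(sub { "$ns/$_[1]" });      # .NET-style SOAPAction: namespace/method

        my $som = $soap->call('LoginPlainText',
            SOAP::Data->name('strLogin')->value('admin'),
            SOAP::Data->name('strPassword')->value('Adm1234'));

        die $som->faultstring if $som->fault;
        my $ticket = $som->valueof('//LoginPlainTextResponse/strAuthTicket');
        print "Auth ticket: $ticket\n";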

  • Performance Optimization for Matrix Rotation

    - by Summer_More_More_Tea
    - by Summer_More_More_Tea
    Hello everyone: I'm stuck on a performance-optimization lab from the book "Computer Systems: A Programmer's Perspective", described as follows. In an N*N matrix M, where N is a multiple of 32, the rotate operation can be represented as:

        Transpose: interchange elements M(i,j) and M(j,i)
        Exchange rows: row i is exchanged with row N-1-i

    An example of matrix rotation (N is 3 instead of 32 for simplicity):

        -------                    -------
        |1|2|3|                    |3|6|9|
        -------                    -------
        |4|5|6|   after rotate is  |2|5|8|
        -------                    -------
        |7|8|9|                    |1|4|7|
        -------                    -------

    A naive implementation is:

        #define RIDX(i,j,n) ((i)*(n)+(j))

        void naive_rotate(int dim, pixel *src, pixel *dst)
        {
            int i, j;
            for (i = 0; i < dim; i++)
                for (j = 0; j < dim; j++)
                    dst[RIDX(dim-1-j, i, dim)] = src[RIDX(i, j, dim)];
        }

    I came up with an idea based on inner-loop unrolling. The results:

        Code Version      Speed Up
        original          1x
        unrolled by 2     1.33x
        unrolled by 4     1.33x
        unrolled by 8     1.55x
        unrolled by 16    1.67x
        unrolled by 32    1.61x

    I also found a code snippet on pastebin.com that seems to solve this problem:

        void rotate(int dim, pixel *src, pixel *dst)
        {
            int stride = 32;
            int count = dim >> 5;
            src += dim - 1;
            int a1 = count;
            do {
                int a2 = dim;
                do {
                    int a3 = stride;
                    do {
                        *dst++ = *src;
                        src += dim;
                    } while (--a3);
                    src -= dim * stride + 1;
                    dst += dim - stride;
                } while (--a2);
                src += dim * (stride + 1);
                dst -= dim * dim - stride;
            } while (--a1);
        }

    After carefully reading the code, I think the main idea of this solution is to treat every 32 rows as a data zone and perform the rotation zone by zone. The speed-up of this version is 1.85x, beating all of the loop-unrolled versions. Here are the questions: In the inner-loop-unrolled version, why does the improvement shrink as the unrolling factor grows — the step from 8 to 16 gains much less than the step from 4 to 8, and 32 is actually slower than 16? Does the result have some relationship with the depth of the CPU pipeline? If so, could the fall-off reflect the pipeline length? And what is the probable reason for the speed-up of the data-zone version? It does not seem essentially different from the original naive version.

    EDIT: My test environment is an Intel Centrino Duo processor, and the gcc version is 4.4. Any advice will be highly appreciated! Kind regards!
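    For reference, the pastebin version looks like a cache-blocking transformation. A more readable sketch of the same idea (my reading of it, not the lab's official solution; pixel is the struct defined by the lab harness): split the columns of src into strips of 32, so that for each strip every pass over i rewrites the same 32 destination rows while their cache lines are still hot:

        #define RIDX(i,j,n) ((i)*(n)+(j))

        /* Blocked rotate: process src in column strips of 32. Within one strip,
           each pass over i writes into the same 32 destination rows, so those
           cache lines stay resident instead of being evicted between writes. */
        void rotate_blocked(int dim, pixel *src, pixel *dst)
        {
            int i, j, jj;
            const int block = 32; /* dim is a multiple of 32 per the lab spec */
            for (jj = 0; jj < dim; jj += block)
                for (i = 0; i < dim; i++)
                    for (j = jj; j < jj + block; j++)
                        dst[RIDX(dim-1-j, i, dim)] = src[RIDX(i, j, dim)];
        }

    If this reading is right, it also suggests an answer to the second question: the win is not in the arithmetic but in the memory access pattern — naive_rotate touches a different destination cache line on every single write, while the blocked version reuses each one many times.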

  • Java Refuses to Start - Could not reserve enough space for object heap

    - by Randyaa
    - by Randyaa
    Background: We have a pool of approximately 20 Linux blades. Some run SuSE, some run RedHat. All share NAS space, which contains the following three folders:

        /NAS/app/java - a symlink pointing to a Java JDK installation, currently version 1.5.0_10
        /NAS/app/lib  - a symlink pointing to a version of our application
        /NAS/data     - the directory our output is written to

    All our machines have two processors (hyperthreaded), 4 GB of physical memory, and 4 GB of swap space. We limit the number of 'jobs' each machine can process at a given time to 6 (this number likely needs to change, but it does not enter into the current problem, so please ignore it for the time being). Some of our jobs set a max heap size of 512 MB, others reserve a max heap size of 2048 MB. Again, we realize we could exceed the available memory if 6 jobs started on the same machine with the heap size set to 2048, but to our knowledge this has not yet occurred.

    The problem: Once in a while a job fails immediately with the following message:

        Error occurred during initialization of VM
        Could not reserve enough space for object heap
        Could not create the Java virtual machine.

    We used to chalk this up to too many jobs running at the same time on the same machine. The problem happened infrequently enough (maybe once a month) that we'd just restart the job and everything would be fine. It has recently gotten much worse: all of our jobs that request a max heap size of 2048m fail immediately almost every time and need to be restarted several times before completing. We've gone out to individual machines and tried executing them manually, with the same result.

    Debugging: It turns out the problem exists only on our SuSE boxes. The reason it has been happening more frequently is that we've been adding more machines, and the new ones are SuSE. 'cat /proc/version' on the SuSE boxes gives us:

        Linux version 2.6.5-7.244-bigsmp (geeko@buildhost) (gcc version 3.3.3 (SuSE Linux)) #1 SMP Mon Dec 12 18:32:25 UTC 2005

    'cat /proc/version' on the RedHat boxes gives us:

        Linux version 2.4.21-32.0.1.ELsmp ([email protected]) (gcc version 3.2.3 20030502 (Red Hat Linux 3.2.3-52)) #1 SMP Tue May 17 17:52:23 EDT 2005

    'uname -a' gives us the following on both types of machine:

        UTC 2005 i686 i686 i386 GNU/Linux

    No jobs are running on the machine, and no other processes are using much memory; all of the processes currently running might be using 100 MB in total. 'top' currently shows:

        Mem:  4146528k total, 3536360k used,  610168k free, 132136k buffers
        Swap: 4194288k total,       0k used, 4194288k free, 3283908k cached

    'vmstat' currently shows:

        procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
         r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa
         0  0      0 610292 132136 3283908    0    0     0     2   26    15  0  0 100  0

    If we kick off a job with the following command line (max heap of 1850 MB) it starts fine:

        java/bin/java -Xmx1850M -cp helloworld.jar HelloWorld
        Hello World

    If we bump the max heap size up to 1875 MB, it fails:

        java/bin/java -Xmx1875M -cp helloworld.jar HelloWorld
        Error occurred during initialization of VM
        Could not reserve enough space for object heap
        Could not create the Java virtual machine.

    It's quite clear that the memory currently in use is buffer/cache, which is why so little shows up as 'free'. What isn't clear is why there is a magic 1850 MB line above which Java can't start. Any explanations would be greatly appreciated.
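    A hedged aside (plausible given the i686 kernels shown, but an assumption): on a 32-bit JVM the heap must be reserved as one contiguous range of virtual address space, so the ceiling is set by address-space layout (kernel/user split, shared library placement, thread stacks) rather than by free RAM — which would explain why the cutoff ignores the buffer cache entirely and differs between the two kernels. A tiny probe for bisecting the actual ceiling on each box:

        /* Prints the heap the JVM actually reserved; run with increasing -Xmx
           (e.g. java -Xmx1850M HeapCheck) to find each machine's ceiling. */
        public class HeapCheck {
            public static void main(String[] args) {
                long max = Runtime.getRuntime().maxMemory();
                System.out.println("Max heap reserved: " + (max >> 20) + " MB");
            }
        }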

  • Web-Frameworks for Education Management Systems?

    - by Indebi
    - by Indebi
    So, I'm working on an idea — I'll give a brief overview of it below — but my question is: what are some good web frameworks for this situation? I have some experience in C# and Python, considerably more in C# than Python, though I expect to learn new things either way.

    My idea is a completely web-based, community-oriented education management system that focuses on making students' and teachers' day-to-day lives easier. For students it provides a centralized place to do homework, study for tests, and reinforce concepts learned in class. For teachers it provides a centralized place to handle assignments, attendance, homework, tests, and all the other major parts of classroom management — all of it in a community-oriented fashion. Everything a teacher does is shared and open to constructive criticism, allowing other teachers to use their assignments and tests, and letting students or other teachers comment on, rate, and critique them. This encourages an openness that lets teachers focus on teaching and students focus on learning. The community wouldn't be limited to one school or school district; the system would be completely school-independent. (I have no problem hearing constructive criticism of the idea, but I'd prefer the answers to focus on my question.)

    I have briefly explored the following options:

    (1) Django. I have it installed and have played with it a little. I really like how easy setting up databases is and how it handles the database completely for you. I don't know how to use it very well yet, and I don't quite understand its Model-View-Controller paradigm, but I haven't thought about it much. I also like that it uses Python.

    (2) ASP.NET. I don't really like Visual Studio for ASP.NET development — I hate the way the web designer works; it feels clunky and old — although I do like the server-side development part. I also dislike how expensive ASP.NET and Visual Studio are overall, even though I get them free for now through DreamSpark.

    (3) Ruby on Rails. I haven't been able to explore this much because I could not get Rails (or maybe Ruby) properly installed. I first installed it within RadRails and that didn't work, so I uninstalled RadRails, installed the latest Ruby from the official Windows installer, and installed Rails through gem — and even after all that it still didn't work. I then installed NetBeans and tried there, with no luck.

    (4) Silverlight. I've played with this one the most. It's very similar to WPF (which I've used the most) in a lot of ways, but I don't like how database connectivity works, at least in comparison to Django. I also dislike how expensive everything Microsoft is, even if I get it free for now with DreamSpark.

    I would like to hear suggestions from experienced web developers on what I should use and why, or at least on some good options for my scenario. Your help would be very appreciated.

  • A Question about using Jython when running a receiving socket in Python

    - by abusemind
    - by abusemind
    Hi, I don't have much knowledge of Python or network programming. I am currently trying to implement a simple application that can receive a text message sent by a user, fetch some information from the Google search API, and return the results to the user via text message. The application will keep listening for users' messages and reply immediately.

    How do I get the text message sent by the user? Through a program named Fetion, from the mobile supplier in China. The Fetion client, much like an instant-messaging tool, can send messages to and receive messages from people who use their mobile phones for SMS. I am using an open-source Python program that simulates Fetion, so basically I can use this Python program to communicate over SMS with other people on their cell phones.

    My core program is based on Java, so I need to bring this Python program into the Java environment. I am using Jython, and I can now send messages to users with a few lines of Java code. The real problem is receiving messages from users via SMS. In the Python code, a new thread is created to listen to the user continuously. This works fine in Python, but when I run the same code under Jython, the following exception occurs:

        Exception in thread Thread: Traceback (most recent call last):
          File "D:\jython2.5.1\Lib\threading.py", line 178, in _Thread__bootstrap
            self.run()
          File "<iostream>", line 1389, in run
          File "<iostream>", line 1207, in receive
          File "<iostream>", line 1207, in receive
          File "<iostream>", line 150, in recv
          File "D:\jython2.5.1\Lib\select.py", line 223, in native_select
            pobj.register(fd, POLLIN)
          File "D:\jython2.5.1\Lib\select.py", line 104, in register
            raise _map_exception(jlx)
        error: (20000, 'socket must be in non-blocking mode')

    Line 150 of the Python code is as follows:

        def recv(self, timeout=False):
            if self.login_type == "HTTP":
                time.sleep(10)
                return self.get_offline_msg()
                pass
            else:
                if timeout:
                    infd, outfd, errfd = select([self.__sock,], [], [], timeout)  # <-- line 150 here
                else:
                    infd, outfd, errfd = select([self.__sock,], [], [])
                if len(infd) != 0:
                    ret = self.__tcp_recv()
                    num = len(ret)
                    d_print(('num',), locals())
                    if num == 0:
                        return ret
                    if num == 1:
                        return ret[0]
                    for r in ret:
                        self.queue.put(r)
                        d_print(('r',), locals())
                if not self.queue.empty():
                    return self.queue.get()
                else:
                    return "TimeOut"

    Because I am not very familiar with Python — especially the socket part — and am also new to Jython, I really need your help, or just advice or an explanation. Thank you very much!
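    The error message itself points at the likely fix: Jython's select/poll emulation refuses sockets that are still in blocking mode, whereas CPython does not care. A minimal sketch of the adjustment inside recv(), assuming self.__sock is a plain socket object and select is imported as in the post's code:

        # Sketch: switch the socket to non-blocking before handing it to select().
        # 0 = non-blocking; Jython requires this for sockets passed to select/poll.
        self.__sock.setblocking(0)
        if timeout:
            infd, outfd, errfd = select([self.__sock], [], [], timeout)
        else:
            infd, outfd, errfd = select([self.__sock], [], [])

    Once select() reports the socket readable, the subsequent recv calls should return immediately, so the rest of the method can stay as it is — just be aware that any other call on the same socket made without a prior select (a bare recv, for instance) may now raise instead of blocking.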
