  • Problem getting Java Streams in HP Tandem (Non-Stop)

    - by AndreaG
    Hi. We are porting a simple Java application between Tandem NonStop systems, from G-Series to H-Series. The Java version is 1.5.0_02. When performing basic I/O tasks like getting an output stream or opening a client socket, we receive exceptions like:

        java.io.IOException: Value out of range
        java.net.SocketException: Value out of range

    ("Value out of range" is Tandem native jargon for, well, quite everything I suppose.) Has anybody had similar issues, i.e. I/O corruption while, for example, messing with JNI? I suppose there is something wrong with the system, but where might it be? Thank you.

    EDIT: adding snippets as requested.

    Sample snippet (a), using Runtime.exec() (adapted):

        Runtime r = Runtime.getRuntime();
        Properties envVars = new Properties();
        Process p = r.exec("/bin/env");
        envVars.load(p.getInputStream());

    Stack trace (a):

        java.io.IOException: Value out of range (errno:4034)
            at java.io.FileInputStream.readBytes(Native Method)
            at java.io.FileInputStream.read(FileInputStream.java:194)
            at java.lang.UNIXProcess$DeferredCloseInputStream.read(UNIXProcess.java:221)
            at java.io.BufferedInputStream.read1(BufferedInputStream.java:254)
            at java.io.BufferedInputStream.read(BufferedInputStream.java:313)
            at sun.nio.cs.StreamDecoder$CharsetSD.readBytes(StreamDecoder.java:411)
            at sun.nio.cs.StreamDecoder$CharsetSD.implRead(StreamDecoder.java:453)
            at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:183)
            at java.io.InputStreamReader.read(InputStreamReader.java:167)
            at java.io.BufferedReader.fill(BufferedReader.java:136)
            at java.io.BufferedReader.readLine(BufferedReader.java:299)
            at java.io.BufferedReader.readLine(BufferedReader.java:362)
            at util.Environment.getVariables(Environment.java:39)

    The last line fails, and output gets redirected to the console (!).

    Sample snippet (b), using HttpURLConnection:

        public WorkerThread(HttpURLConnection conn, String requestData, Logger logger) {
            this.conn = conn;
            ...
        }

        public void run() {
            OutputStream out = conn.getOutputStream();
        }

    Stack trace (b):

        java.net.SocketException: Value out of range (errno:4034)
            at java.net.PlainSocketImpl.socketConnect(Native Method)
            at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333)
            at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195)
            at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182)
            at java.net.Socket.connect(Socket.java:507)
            at sun.net.NetworkClient.doConnect(NetworkClient.java:155)
            at sun.net.www.http.HttpClient.openServer(HttpClient.java:365)
            at sun.net.www.http.HttpClient.openServer(HttpClient.java:477)
            at sun.net.www.protocol.https.HttpsClient.<init>(HttpsClient.java:280)
            at sun.net.www.protocol.https.HttpsClient.New(HttpsClient.java:337)
            at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.getNewHttpClient(AbstractDelegateHttpsURLConnection.java:176)
            at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:736)
            at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:162)
            at sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:828)
            at sun.net.www.protocol.https.HttpsURLConnectionImpl.getOutputStream(HttpsURLConnectionImpl.java:230)

    Case (a) can be avoided because it was a workaround for other issues with the previous JRE version (!), but the same behaviour with sockets is really nasty.
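    Not a fix for the errno 4034 itself, but when chasing a platform-specific I/O failure like this it helps to take pipe handling out of the equation first. Below is a minimal sketch (the EnvReader class and its helper are illustrative, not from the original code) of running /bin/env on Java 5 with stderr merged into stdout and the output fully drained before reaping the child:

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;
        import java.util.Properties;

        public class EnvReader {
            // Hypothetical helper: runs /bin/env, merging stderr into stdout so
            // the child can never block on a full error pipe, and drains the
            // output completely before waiting for the process to exit.
            static Properties readEnv() throws IOException, InterruptedException {
                ProcessBuilder pb = new ProcessBuilder("/bin/env");
                pb.redirectErrorStream(true);          // merge stderr into stdout
                Process p = pb.start();
                Properties env = new Properties();
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(p.getInputStream()));
                String line;
                while ((line = in.readLine()) != null) {
                    int eq = line.indexOf('=');
                    if (eq > 0) {
                        env.setProperty(line.substring(0, eq), line.substring(eq + 1));
                    }
                }
                in.close();
                p.waitFor();                           // reap the child
                return env;
            }
        }

    If the same exception still appears with output drained this way, that points more strongly at the native layer than at stream handling in the application.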

  • Cases of companies taking IP rights of your own personal projects developed outside company time

    - by GSS
    Hi, I have heard of cases where a developer working for a company is also making his own personal projects in his own time, using his own equipment, yet the company he works for tries to claim ownership of the project. I really find this annoying, and bang out of order. It should also be illegal.

    I am in this position (I work for a company and am working on my own systems - from small class libraries used to practise what I learn in my exam revision to a large commercial-scale system). While I don't know if the company will try to take ownership, all I know is they say they do not want a conflict of interest. Fair enough - my system is developed in my own time using my own equipment. They also say that work time should be for work only, which it is. The funny thing is that work is so boring, easy and slow that I have plenty of free time, which I wish I could spend on something productive - said system.

    The problem is, my company does not take hiring technical talent seriously. This is my first job; I am a junior coder (but my status/position doesn't really reflect what I can do), and I am the only developer. Likewise with the guy who controls Windows Server. As the contract does not say anything about taking ownership, I would assume they would. They would try to milk my success (I've made a good impression, so I am sure they would). How can this be allowed? Are there any examples of this happening to any fellow Stackers here? It really makes my blood boil.

    What I find funny is that my company hardly has the expertise and resources to even be able to successfully run a project of my size. What I do at work is an ASP.NET application consisting of five pages, and even then there are flaws in the project. If I told them that they would also have to take responsibility for flaws in the project, they would think twice! It's exactly because of this that I save the best code for myself, and at work I write rubbish code full of code smells. The company doesn't really care about error handling, as long as the business functionality works (i.e. a scheduled email sends, but there is no error handling). They'd think twice when they see the embarrassment and business cost of a YSOD...

  • What is the best python module skeleton code?

    - by user213060
    == Subjective Question Warning == Looking for well-supported opinions or supporting evidence. Let us assume that skeleton code can be good. If you disagree with the very concept of module skeleton code then fine, but please refrain from repeating that opinion here.

    Many Python IDEs will start you with a template like:

        print 'hello world'

    That's not enough... So here's my skeleton code to get this question started.

    My module skeleton, short version:

        #!/usr/bin/env python
        """
        Module Docstring
        """

        #
        ## Code goes here.
        #

        def test():
            """Testing Docstring"""
            pass

        if __name__ == '__main__':
            test()

    My module skeleton, long version:

        #!/usr/bin/env python
        # -*- coding: ascii -*-
        """
        Module Docstring
        Docstrings: http://www.python.org/dev/peps/pep-0257/
        """

        __author__ = 'Joe Author ([email protected])'
        __copyright__ = 'Copyright (c) 2009-2010 Joe Author'
        __license__ = 'New-style BSD'
        __vcs_id__ = '$Id$'
        __version__ = '1.2.3'  # Versioning: http://www.python.org/dev/peps/pep-0386/

        #
        ## Code goes here.
        #

        def test():
            """Testing Docstring"""
            pass

        if __name__ == '__main__':
            test()

    Notes:

        ===MODULE TYPE===
        Since the vast majority of my modules are "library" types, I have
        constructed this example skeleton as such. For modules that act as the
        main entry for running the full application, you would make changes
        such as running a main() function instead of the test() function in
        __main__.

        ===VERSIONING===
        The following practice, specified in PEP 8, no longer makes sense:

            __version__ = '$Revision: 1.2.3 $'

        for two reasons: (1) distributed version control systems make it
        necessary to include more than just a revision number, e.g. author name
        and revision number; (2) it's a revision number, not a version number.
        Instead, the __vcs_id__ variable is being adopted. This expands to, for
        example:

            __vcs_id__ = '$Id: example.py,v 1.1.1.1 2001/07/21 22:14:04 goodger Exp $'

        ===VCS DATE===
        Likewise, the date variable has been removed:

            __date__ = '$Date: 2009/01/02 20:19:18 $'

        ===CHARACTER ENCODING===
        If the coding is explicitly specified, then it should be set to the
        default setting of ascii. This can be modified if necessary (rarely in
        practice). Defaulting to utf-8 can cause anomalies with editors that
        have poor unicode support.

    There are a lot of PEPs that put forward coding style recommendations. Am I missing any important best practices? What is the best Python module skeleton code?

    Update: Show me any kind of "best" that you prefer. Tell us what metrics you used to qualify "best".

  • What is a good dumbed-down, safe template system for PHP?

    - by Wilhelm
    (Summary: my users need to be able to edit the structure of their dynamically generated web pages without being able to do any damage.)

    Greetings, ladies and gentlemen. I am currently working on a service where customers from a specific demographic can create a specific type of web site and fill it with their own content. The system is written in PHP. Many of the users of this system wish to edit how their particular web site looks, or, more commonly, have a designer do it for them. Editing the CSS is fine and dandy, but sometimes that's not enough. Sometimes they want to shuffle the entire page structure around by editing the raw HTML of the dynamically created web pages.

    The templating system used by WordPress is, as far as I can see, perfect for my use - except for one thing, which is critically important: in addition to being able to edit how comments are displayed or where the menu goes, someone editing a template can have that template execute arbitrary PHP code. As the same codebase runs all these different sites, with all content in the same database, allowing my users to run arbitrary code is clearly out of the question.

    So what I need is a dumbed-down, idiot-proof templating system where my users can edit most of the page structure on their own, pulling in the dynamic sections wherever they like, without being able to even echo 1+1;. Observe the following pseudocode:

        <!DOCTYPE html>
        <title><!-- $title --></title>
        <!-- header() -->
        <!-- menu() -->
        <div>Some random custom crap added by the user.</div>
        <!-- page_content() -->

    That's the degree of power I'd like to grant my users. They don't need to do their own loops or calculations or anything. Just include my variables and functions and leave the rest to me. I'm sure I'm not the only person on the planet that needs something like this. Do you know of any ready-made templating systems I could use? Thanks in advance for your reply.
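    To make the idea concrete: a whitelist token-substitution engine of this kind needs only a regular expression and a map, and nothing in the template is ever evaluated. The sketch below is in Java rather than PHP, purely to illustrate the shape of the approach (class and token names are assumptions); the same few lines translate directly to preg_replace_callback in PHP:

        import java.util.HashMap;
        import java.util.Map;
        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class SafeTemplate {
            // Matches placeholders of the form <!-- $name --> or <!-- name() -->.
            private static final Pattern TOKEN =
                    Pattern.compile("<!--\\s*(\\$\\w+|\\w+\\(\\))\\s*-->");

            // Replaces whitelisted tokens only; everything else is passed
            // through verbatim, so a template can never execute code.
            public static String render(String template, Map<String, String> values) {
                Matcher m = TOKEN.matcher(template);
                StringBuffer out = new StringBuffer();
                while (m.find()) {
                    String replacement = values.get(m.group(1));
                    // Unknown tokens are dropped rather than executed.
                    m.appendReplacement(out, Matcher.quoteReplacement(
                            replacement == null ? "" : replacement));
                }
                m.appendTail(out);
                return out.toString();
            }

            public static void main(String[] args) {
                Map<String, String> vals = new HashMap<String, String>();
                vals.put("$title", "My Shop");
                vals.put("menu()", "<ul><li>Home</li></ul>");
                System.out.println(render(
                        "<title><!-- $title --></title>\n<!-- menu() -->", vals));
            }
        }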

  • Efficiency of data structures in C99 (possibly affected by endianness)

    - by Ninefingers
    Hi all, I have a couple of questions that are all inter-related. Basically, in the algorithm I am implementing, a word w is defined as four bytes, so it can be contained whole in a uint32_t. However, during the operation of the algorithm I often need to access the various parts of the word. Now, I can do this in two ways:

        uint32_t w = 0x11223344;
        uint8_t a = (w & 0xff000000) >> 24;
        uint8_t b = (w & 0x00ff0000) >> 16;
        uint8_t c = (w & 0x0000ff00) >> 8;
        uint8_t d = (w & 0x000000ff);

    However, part of me thinks that isn't particularly efficient. I thought a better way would be to use a union representation, like so:

        typedef union {
            struct {
                uint8_t d;
                uint8_t c;
                uint8_t b;
                uint8_t a;
            };
            uint32_t n;
        } word32;

    Using this method I can assign word32 w = 0x11223344; and then access the various parts as I require (w.a == 0x11 on a little-endian system). However, at this stage I come up against endianness issues: namely, on big-endian systems my struct is defined incorrectly, so I need to re-order the word prior to it being passed in. This I can do without too much difficulty.

    My question is, then: are the various bitwise ANDs and shifts of the first approach efficient compared to the implementation using a union? Is there any difference between the two generally? Which way should I go on a modern x86_64 processor? Is endianness just a red herring here?

    I could inspect the assembly output of course, but my knowledge of compilers is not brilliant. I would have thought a union would be more efficient, as it would essentially convert to memory offsets, like so:

        mov eax, [r9+8]

    Would a compiler realise that is what is happening in the bit-shift case above? If it matters, I'm using C99; specifically, my compiler is clang (LLVM). Thanks in advance.

  • Is my understanding of "select distinct" correct?

    - by paxdiablo
    We recently discovered a performance problem with one of our systems, and I think I have the fix, but I'm not certain my understanding is correct.

    In simplest form, we have a table blah into which we accumulate various values based on a key field. The basic form is:

        recdate   date
        rectime   time
        system    varchar(20)
        count     integer
        accum1    integer
        accum2    integer

    There are a lot more accumulators than that, but they're all of the same form. The primary key is made up of recdate, rectime and system. As values are collected into the table, the count for a given recdate/rectime/system is incremented and the values for that key are added to the accumulators. That means the averages can be obtained by using accumN / count.

    Now we also have a view over that table, specified as follows:

        create view blah_v (
            recdate, rectime, system, count, accum1, accum2
        ) as select distinct
            recdate, rectime, system, count,
            value (case when count > 0 then accum1 / count end, 0),
            value (case when count > 0 then accum2 / count end, 0)
        from blah;

    In other words, the view gives us the average value of the accumulators rather than the sums. It also makes sure we don't get a divide-by-zero in those cases where the count is zero (these records do exist and we are not allowed to remove them, so don't bother telling me they're rubbish - you're preaching to the choir).

    We've noticed that the time taken to do:

        select distinct recdate from XX

    varies greatly depending on whether we use the table or the view. I'm talking about the difference being 1 second for the table and 27 seconds for the view (with 100K rows). We actually tracked it back to the select distinct. What seems to be happening is that the DBMS is loading all the rows in and sorting them so as to remove duplicates. That's fair enough; it's what we stupidly told it to do. But I'm pretty sure the fact that the view includes every component of the primary key means that it's impossible to have duplicates anyway. We've validated the problem since: if we create another view without the distinct, it performs at the same speed as the underlying table.

    I just wanted to confirm my understanding that a select distinct cannot produce duplicates if it includes all the primary key components. If that's so, then we can simply change the view appropriately.

  • Revision control for writing programming lessons

    - by Dietrich Epp
    I'd like to write a series of programming lessons that guide programmers to build a certain kind of program. After each lesson, I'd like to provide sample code that implements what that lesson covered, and the next lesson would use that code as a starting point.

    Right now I'm using Git to keep track of the code from lesson to lesson. Each lesson has its own branch:

        lesson1: A--B--C
                        \
        lesson2:         D--E--F
                                \
        lesson3:                 G--H--I

    However, suppose that now I want to make it easier on the Windows programmers using my lessons, so I add a Visual Studio project to lesson 1 and then merge it into lessons 2 and 3:

        lesson1: A--B--C--------------J
                        \              \
        lesson2:         D--E--F--------K
                                \        \
        lesson3:                 G--H--I--L

    And then someone points out a bug in lesson 2 that causes crashes on certain systems. (This diagram is where I am right now, and I'm having doubts about continuing along this path.)

        lesson1: A--B--C--------------J
                        \              \
        lesson2:         D--E--F--------K--M
                                \        \  \
        lesson3:                 G--H--I--L--N

    Here are the problems I imagine having:

    - If I had many lessons, and I fix something in lesson 1, am I going to have to spend fifteen minutes or more just merging that one simple change? I know I'll probably have to test all of those lessons again, but I can put that off.
    - When I make a bunch of changes to various lessons on one computer, how do I pull all of the branches at the same time?
    - If I decide to publish these lessons, I'd like a way to tag all of the branches to correspond with what I publish. I figure I'll just need to tag each branch separately, but it would be nice if there were a better way.
    - When I look at the history, I imagine becoming terribly confused about what I've done. Compare the above diagram to a hypothetical diagram below, where I use rebase instead of merge (and rebase has its own problems):

        lesson1: A--B--C--J
                           \
        lesson2:            D2--E2--F2--M
                                         \
        lesson3:                          G2--H2--I2

    Do any of you have experience working with a project like this? Should I consider using a different VCS, such as Darcs? (Note: it would be a real pain to use a centralized VCS, so don't suggest one of those unless the benefits are clear.) Should I consider writing plugins or extra tools for a VCS (such as a "meta tag" which tags several branches)?

  • No norwegian characters in LaTeX

    - by DreamCodeR
    Hi, I have translated a document from English to Norwegian in the LaTeX format, and while using Norwegian special characters, I get an error using

        \usepackage[utf8x]{inputenc}

    to try and display the Norwegian (Scandinavian) special characters in PostScript/PDF/DVI format, saying:

        Package utf8x Error: MalformedUTF-8sequence.

    So while that didn't work, I tried out another possible solution:

        \usepackage{ucs}
        \usepackage[norsk]{babel}

    And when I tried to save that in Emacs I got this message:

        These default coding systems were tried to encode text in the buffer `lol.tex':
        (utf-8-unix (905 . 4194277) (916 . 4194245) (945 . 4194278) (950 . 4194277)
        (954 . 4194296) (990 . 4194277) (1010 . 4194277) (1013 . 4194278)
        (1051 . 4194277) (1078 . 4194296) (1105 . 4194296))
        However, each of them encountered characters it couldn't encode:
        utf-8-unix cannot encode these: \345 \305 \346 \345 \370 \345 \345 \346 \345 \370 ...

    Thanks to Emacs I have the possibility to check out the properties of those characters, and the first one tells me:

        character: \345 (4194277, #o17777745, #x3fffe5)
        preferred charset: eight-bit (Raw bytes 128-255)
        code point: 0xE5
        syntax: w which means: word
        buffer code: #xE5
        file code: not encodable by coding system utf-8-unix
        display: not encodable for terminal

    Which doesn't tell me much. When I try to build this with texi2dvi --dvipdf filename.text I get a perfectly fine PDF - all without the special Norwegian characters.

    When I am about to save, Emacs also asks me: "Select coding system (default raw-text):", and I type in utf-8 to choose its coding system. I have also tried choosing the default raw-text to see if I get a different result. But nothing. At last I tried

        \lstset{inputencoding=utf8x, extendedchars=\true}

    - a code I came across while trying to google the solution to this problem. That gives me this error:

        Undefined control sequence.

    So basically, I have tried every encoding option I have been able to find, and nothing works. I am desperately trying to make this work, since the Norwegian translation must be published before the deadline. As additional information I may add that I found out later on that I only had en_US.UTF-8 in my locale, so I added nb_NO.UTF-8 and nb_NO.ISO-8859-15 and ran locale-gen + reboot, without any changes.

    I hope I have provided enough information to get some assistance; the characters in question are æ ø å.

  • How to merge a "branch" that isn't really a branch (wasn't created by an svn copy)

    - by MatrixFrog
    I'm working on a team with lots of people who are pretty unfamiliar with the concepts of version control systems, and are just kind of doing whatever seems to work, by trial and error. Someone created a "branch" from the trunk that is not ancestrally related to the trunk. My guess is it went something like this:

    - They created a folder in branches.
    - They checked out all the code from the trunk to somewhere on their desktop.
    - They added all that code to the newly created folder as though it was a bunch of brand new files.

    So the repository isn't aware that all that code is actually just a copy of the trunk. When I look at the history of that branch in TortoiseSVN and uncheck the "Stop on copy/rename" box, there is no revision that has the trunk (or any other path) under the "Copy from path" column.

    Then they made lots of changes on their "branch". Meanwhile, others were making lots of changes on the trunk. We tried to do a merge, and of course it doesn't work, because the trunk and the fake branch are not ancestrally related. I can see only two ways to resolve this:

    - Go through the logs on the "branch", look at every change that was made, and manually apply each change to the trunk.
    - Go through the logs on the trunk, look at every change that was made between revision 540 (when the "branch" was created) and HEAD, and manually apply each change to the "branch".

    This involves 7 revisions one way or 11 revisions the other way, so neither one is really that terrible. But is there any way to cause the repository to "realize" that the branch really IS ancestrally related, even though it was created incorrectly, so that we can take advantage of the built-in merging functionality in Eclipse/TortoiseSVN?

    (You may be wondering: why did your company hire these people and allow them to access the SVN repository without making sure they knew how to use it properly first?! We didn't - this is a school assignment, which is a collaboration between two different classes. The ones in the lower class were given a very quick, hand-wavey "overview" of SVN which didn't really teach them anything. I've asked everyone in the group to please PLEASE read the svn book, and I'll make sure we (the slightly more experienced half of the team) keep a close eye on the repository to ensure this doesn't happen again.)

  • Segfault (possibly due to casting)

    - by BSchlinker
    I don't normally go to stackoverflow for sigsegv errors, but I have done all I can with my debugger at the moment. The segmentation fault error is thrown following the completion of the function. Any ideas what I'm overlooking? I suspect that it is due to the casting of the sockaddr to the sockaddr_in, but I am unable to find any mistakes there. (Removing that line gets rid of the seg fault -- but I know that may not be the root cause here.)

        // basic setup
        int sockfd;
        char str[INET_ADDRSTRLEN];
        sockaddr* sa;
        socklen_t* sl;
        struct addrinfo hints, *servinfo, *p;
        int rv;

        memset(&hints, 0, sizeof hints);
        hints.ai_family = AF_UNSPEC;
        hints.ai_socktype = SOCK_DGRAM;

        // return string
        string foundIP;

        // setup the struct for a connection with selected IP
        if ((rv = getaddrinfo("4.2.2.1", NULL, &hints, &servinfo)) != 0) {
            fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(rv));
            return "1";
        }

        // loop through all the results and make a socket
        for (p = servinfo; p != NULL; p = p->ai_next) {
            if ((sockfd = socket(p->ai_family, p->ai_socktype, p->ai_protocol)) == -1) {
                perror("talker: socket");
                continue;
            }
            break;
        }

        if (p == NULL) {
            fprintf(stderr, "talker: failed to bind socket\n");
            return "2";
        }

        // connect the UDP socket to something
        // we need to connect to get the systems local IP
        connect(sockfd, p->ai_addr, p->ai_addrlen);

        // get information on the local IP from the socket we created
        getsockname(sockfd, sa, sl);

        // convert the sockaddr to a sockaddr_in via casting
        struct sockaddr_in *sa_ipv4 = (struct sockaddr_in *)sa;

        // get the IP from the sockaddr_in and print it
        inet_ntop(AF_INET, &(sa_ipv4->sin_addr), str, INET_ADDRSTRLEN);
        printf("%s\n", str);

        // return the IP
        return foundIP;
    }

  • Help choosing authentication method

    - by Dima
    I need to choose an authentication method for an application installed and integrated in a customer's environment. There are two types of environments - Windows and Linux/Unix. The application is user-based, no web stuff, pure Java. The requirement is to authenticate the users of my application against a customer-provided user base. Meaning: the customer installs my app, but uses his own users to grant or deny access to it. Typical, right? I have three options to consider, and I need to pick the one which would be (a) the most flexible, covering the most common modern environments, and (b) take the least effort while staying robust and standard.

    Option (1) - Authenticate locally, managing user credentials in some local storage, e.g. a file. The customer would then add his users to my application, and it would check the passwords. Simple, clumsy, but it would work. Customers would have to punch in every user they want to grant access to my app using some UI we would have to provide. Lots of work for me, headache for the customer.

    Option (2) - Use LDAP authentication. Customers would tell my app where to look for users, and I would walk their directory, resolving names into user names and trying to bind with the found password. This is a better approach IMO, but more fragile, because I would have to walk an unknown directory structure, and who knows if this will be permitted everywhere. It would also be harder to test, since there are many LDAP implementations out there; the last thing I want is drowning in this voodoo.

    Option (3) - Use plain Kerberos authentication. Customers would tell my app what realm (domain) and which KDC (key distribution center) to use. In an ideal world these two parameters would be all I need to set, while customers could use their own administration tools to configure the domain and KDC. My application would simply delegate the user credentials to this third party (using JAAS or Spring Security) and consider it a success when the third party is happy with them.

    I personally prefer #3, but I'm not sure what surprises I might face. Would this cover Windows and *nix systems entirely? Is there another option to consider?
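    For option (3), the Java side can stay very small. Here is a minimal sketch using JAAS with the Sun Krb5LoginModule; the "AppLogin" configuration name, and the idea of pointing java.security.krb5.realm / java.security.krb5.kdc at the customer's realm and KDC, are assumptions to adapt rather than a definitive implementation:

        import javax.security.auth.callback.*;
        import javax.security.auth.login.LoginContext;
        import javax.security.auth.login.LoginException;

        public class KerberosAuth {
            // Assumes a JAAS config entry named "AppLogin", e.g.:
            //   AppLogin { com.sun.security.auth.module.Krb5LoginModule required; };
            // and the krb5 realm/KDC supplied via system properties.
            public static boolean authenticate(final String user, final char[] pass) {
                try {
                    LoginContext lc = new LoginContext("AppLogin", new CallbackHandler() {
                        public void handle(Callback[] callbacks)
                                throws UnsupportedCallbackException {
                            for (Callback cb : callbacks) {
                                if (cb instanceof NameCallback) {
                                    ((NameCallback) cb).setName(user);
                                } else if (cb instanceof PasswordCallback) {
                                    ((PasswordCallback) cb).setPassword(pass);
                                } else {
                                    throw new UnsupportedCallbackException(cb);
                                }
                            }
                        }
                    });
                    lc.login();   // throws on bad credentials
                    lc.logout();
                    return true;
                } catch (LoginException e) {
                    return false; // wrong password, unreachable KDC, etc.
                }
            }
        }

    The appeal of this shape is that the application never stores or interprets credentials itself; it only learns whether the customer's KDC accepted them.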

  • Microsoft Access and Java JDBC-ODBC Error

    - by user1638362
    Trying to insert some values into a Microsoft Access database using Java. I get an error, however:

        java.sql.SQLException: [Microsoft][ODBC Driver Manager] The specified DSN contains an architecture mismatch between the Driver and Application
        Exception in thread "main" java.lang.NullPointerException

    To create the data source I'm using the SysWoW64 odbcad32 and adding the data source to the System DSN. I mention this because I have seen elsewhere that there are problems which occur on 64-bit systems. However, it still doesn't work for me. Microsoft Office is 32-bit.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.SQLException;
        import java.sql.Statement;

        public class RMIAuctionHouseJDBC {

            public static void main(String[] args) {
                String theItem = "Car";
                String theClient = "KHAN";
                String theMessage = "1001";
                Connection conn = null;

                // Create connection object
                try {
                    Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
                    System.out.println("Driver Found");
                } catch (Exception e) {
                    System.out.println("Driver Not Found");
                    System.err.println(e);
                }

                // connecting to database
                try {
                    String database = "jdbc:odbc:Driver={Microsoft Access Driver (*.accdb)};DBQ=AuctionHouseDatabase.accdb;";
                    conn = DriverManager.getConnection(database, "", "");
                    System.out.println("Conn Found");
                } catch (SQLException se) {
                    System.out.println("Conn Not Found");
                    System.err.println(se);
                }

                // Create select statement and execute it
                try {
                    /*
                    String insertSQL = "INSERT INTO AuctionHouse VALUES ( "
                        + "'" + theItem + "', "
                        + "'" + theClient + "', "
                        + "'" + theMessage + "')";
                    */
                    Statement stmt = conn.createStatement();
                    String insertSQL = "Insert into AuctionHouse VALUES ('Item','Name','Price')";
                    stmt.executeUpdate(insertSQL);
                    // Retrieve the results
                    conn.close();
                } catch (SQLException se) {
                    System.out.println("SqlStatment Not Found");
                    System.err.println(se);
                }
            }
        }

    The stack trace:

        java.sql.SQLException: [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified
            at sun.jdbc.odbc.JdbcOdbc.createSQLException(Unknown Source)
            at sun.jdbc.odbc.JdbcOdbc.standardError(Unknown Source)
            at sun.jdbc.odbc.JdbcOdbc.SQLDriverConnect(Unknown Source)
            at sun.jdbc.odbc.JdbcOdbcConnection.initialize(Unknown Source)
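    Separately from the 32/64-bit DSN mismatch (the usual cure is making the JVM's bitness match the DSN created by the corresponding odbcad32), the string-built INSERT is fragile once real values appear. A parameterized version - a sketch with hypothetical column names, since the AuctionHouse schema isn't shown - would look like:

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;

        public class InsertExample {
            // Column names (Item, Name, Price) are assumptions; adjust to the
            // real AuctionHouse schema.
            public static void insertItem(Connection conn, String item,
                                          String client, String message)
                    throws SQLException {
                String sql = "INSERT INTO AuctionHouse (Item, Name, Price) VALUES (?, ?, ?)";
                PreparedStatement stmt = conn.prepareStatement(sql);
                try {
                    stmt.setString(1, item);
                    stmt.setString(2, client);
                    stmt.setString(3, message);
                    stmt.executeUpdate();
                } finally {
                    stmt.close(); // always release the statement
                }
            }
        }

    A PreparedStatement also sidesteps quoting problems and SQL injection, which the concatenated version in the question is exposed to.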

  • Middleware with generic communication media layer

    - by Tom
    Greetings all, I'm trying to implement middleware (a driver) for an embedded device with a generic communication media layer. I'm not sure what the best way to do it is, so I'm seeking advice from more experienced stackoverflow users :).

    Basically we've got devices around the country communicating with our servers (or a PDA/laptop used in the field). The usual form of communication is over TCP/IP, but it could also be over USB, an RF dongle, IR, etc. The plan is to have an object corresponding to each of these devices, handling the proprietary protocol on one side and requests/responses from other internal systems on the other. The thing is how to create something generic between the media and the handling objects. I had a play around with a TCP dispatcher using boost.asio, but trying to create something generic seems like a nightmare :). Has anybody tried to do something like that? What is the best way to do it?

    Example: a device connects to our Linux server. A new middleware instance is created (on the server) which announces itself to one of the running services (details are not important). The service is responsible for making sure that the device's time is synchronized. So it asks the middleware what the device's time is; the driver translates the request into the device language (protocol) and sends the message; the device responds, and the driver again translates the response for the service. This might seem like overkill for such a simple request, but imagine there are more complex requests which the driver must translate; also there are several versions of the device which use different protocols, etc., but they would all use the same time sync service. The goal is to abstract the devices through the drivers so that the same service can be used to communicate with all of them.

    Another example: we find out that remote communications with the device are down. So we send somebody out with a PDA; he connects to the device using a USB cable and starts up an application which has the same functionality as the time sync service. Again, a middleware instance is created (on the PDA) to translate the communication between the application and the device, this time using USB/serial media rather than TCP/IP as in the previous example.

    I hope it makes more sense now :) Cheers, Tom
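    One way to picture the "generic in between" piece: give the driver a tiny transport interface and hide the medium behind it. The sketch below is in Java with illustrative names (the question's own code is C++/boost.asio), so treat it as a shape, not an implementation; a UsbSerialTransport or RfTransport would implement the same interface, and the driver never knows which one it holds:

        import java.io.IOException;
        import java.io.InputStream;
        import java.io.OutputStream;
        import java.net.Socket;

        // The driver depends only on this interface, never on the medium.
        interface Transport {
            void send(byte[] frame) throws IOException;
            byte[] receive() throws IOException;
            void close() throws IOException;
        }

        class TcpTransport implements Transport {
            private final Socket socket;

            TcpTransport(String host, int port) throws IOException {
                socket = new Socket(host, port);
            }

            public void send(byte[] frame) throws IOException {
                OutputStream out = socket.getOutputStream();
                out.write(frame);
                out.flush();
            }

            public byte[] receive() throws IOException {
                // Toy framing: read whatever is available; a real protocol
                // would use length prefixes or delimiters.
                InputStream in = socket.getInputStream();
                byte[] buf = new byte[4096];
                int n = in.read(buf);
                if (n < 0) throw new IOException("connection closed");
                byte[] frame = new byte[n];
                System.arraycopy(buf, 0, frame, 0, n);
                return frame;
            }

            public void close() throws IOException {
                socket.close();
            }
        }

    The driver then translates a service request (e.g. "get device time") into a protocol frame, calls transport.send(), and parses the reply from transport.receive(); only the Transport implementations differ between the server and the PDA scenarios.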

  • What facets have I missed for creating a 3 person guerilla dev team?

    - by Penguinix
    Sorry for the Windows developers out there - this solution is for Macs only.

    This set of applications accounts for: usability testing, screen capture (video and still), version control, task lists, bug tracking, a developer IDE, a web server, a blog, shared doc editing on the web, team and individual chat, email, databases and continuous integration. It does assume your team members provide their own machines, and that one person has a spare old computer to be the source repository and web server. All for under $200.

    Usability
    - Silverback licenses = 3 x $49.95. "Spontaneous, unobtrusive usability testing software for designers and developers."

    Source control server and clients (multiple options)
    - Subversion = Free. Subversion is an open source version control system.
    - Versions (currently in beta) = Free. Versions provides a pleasant way to work with Subversion on your Mac.
    - Diffly = Free. "Diffly is a tool for exploring Subversion working copies. It shows all files with changes and, clicking on a file, shows a highlighted view of the changes for that file. When you are ready to commit, Diffly makes it easy to select the files you want to check in and assemble a useful commit message."

    Bug/feature/defect tracking (multiple options)
    - Bugzilla = Free. Bugzilla is a "Defect Tracking System" or "Bug-Tracking System". Defect tracking systems allow individuals or groups of developers to keep track of outstanding bugs in their product effectively. Most commercial defect-tracking software vendors charge enormous licensing fees.
    - Trac = Free. Trac is an enhanced wiki and issue tracking system for software development projects.

    Database server & clients
    - MySQL = Free
    - CocoaMySQL = Free

    Web server
    - Apache = Free

    Development and build tools
    - Xcode = Free
    - CruiseControl = Free. CruiseControl is a framework for a continuous build process. It includes, but is not limited to, plugins for email notification, Ant, and various source control tools. A web interface is provided to view the details of the current and previous builds.

    Collaboration tools
    - Writeboard = Free
    - Ta-da List = Free
    - Campfire chat for 4 users = Free
    - WordPress = Free. "WordPress is a state-of-the-art publishing platform with a focus on aesthetics, web standards, and usability. WordPress is both free and priceless at the same time."
    - Gmail = Free. "Gmail is a new kind of webmail, built on the idea that email can be more intuitive, efficient, and useful."

    Screen capture (video / still)
    - Jing = Free. "The concept of Jing is the always-ready program that instantly captures and shares images and video… from your computer to anywhere."

    Lots of great responses:
    - TeamCity [Yo|||]
    - Skype [Eric DeLabar]
    - FogBugz [chakrit]
    - iChat AV and Screen Sharing (built in to the OS) [amrox]
    - Google Docs [amrox]

  • Inference engine to calculate matching set according to internal rules

    - by Zecrates
    I have a set of objects with attributes, and a bunch of rules that, when applied to the set of objects, yield a subset of those objects. To make this easier to understand, I'll give a concrete example. My objects are persons, and each has three attributes: country of origin, gender and age group (all attributes are discrete). I have a bunch of rules, like "all males from the US", which correspond to subsets of this larger set of objects.

    I'm looking for either an existing Java "inference engine" or something similar which will be able to map from the rules to a subset of persons, or advice on how to go about creating my own. I have read up on rule engines, but that term seems to be used exclusively for expert systems that externalize the business rules, and usually doesn't include any advanced form of inferencing. Here are some examples of the more complex scenarios I have to deal with:

    1. I need the conjunction of rules. So when presented with both "include all males" and "exclude all US persons in the 10-20 age group," I'm only interested in the males outside the US, and the males within the US who are outside the 10-20 age group.
    2. Rules may have different priorities (explicitly defined). So a rule saying "exclude all males" will override a rule saying "include all US males."
    3. Rules may be conflicting. So I could have both "include all males" and "exclude all males", in which case the priorities will have to settle the issue.
    4. Rules are symmetric. So "include all males" is equivalent to "exclude all females."
    5. Rules (or rather subsets) may have meta rules (explicitly defined) associated with them. These meta rules will have to be applied whenever the original rule is applied, or whenever the subset is reached via inferencing. So if a meta rule of "exclude the US" is attached to the rule "include all males", and I provide the engine with the rule "exclude all females," it should be able to infer that the "exclude all females" subset is equivalent to the "include all males" subset, and as such apply the "exclude the US" rule additionally.

    I can in all likelihood live without item 5, but I do need all the other properties mentioned. Both my rules and objects are stored in a database and may be updated at any stage, so I'd need to instantiate the 'inference engine' when needed and destroy it afterward.
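    For scale: requirements 1-3 (conjunction, priorities, conflicts) fit in a page of plain Java before any rule-engine library is needed; symmetry and meta rules are where it gets hard. A sketch, with illustrative class names and an assumed "highest-priority matching rule wins" policy:

        import java.util.*;

        class Person {
            final String country, gender, ageGroup;
            Person(String c, String g, String a) { country = c; gender = g; ageGroup = a; }
            public String toString() { return country + "/" + gender + "/" + ageGroup; }
        }

        class Rule {
            final boolean include;     // include or exclude
            final int priority;        // higher wins
            final String country, gender, ageGroup;  // null = wildcard

            Rule(boolean include, int priority, String c, String g, String a) {
                this.include = include; this.priority = priority;
                country = c; gender = g; ageGroup = a;
            }

            boolean matches(Person p) {
                return (country == null || country.equals(p.country))
                    && (gender == null || gender.equals(p.gender))
                    && (ageGroup == null || ageGroup.equals(p.ageGroup));
            }
        }

        class RuleEngine {
            // Applies rules lowest-priority first so higher priorities overwrite;
            // a person matched by no rule at all is excluded.
            static List<Person> apply(List<Person> all, List<Rule> rules) {
                List<Rule> ordered = new ArrayList<Rule>(rules);
                Collections.sort(ordered, new Comparator<Rule>() {
                    public int compare(Rule a, Rule b) { return a.priority - b.priority; }
                });
                List<Person> result = new ArrayList<Person>();
                for (Person p : all) {
                    boolean in = false;
                    for (Rule r : ordered) {
                        if (r.matches(p)) in = r.include; // highest match wins
                    }
                    if (in) result.add(p);
                }
                return result;
            }
        }

    Points 4 and 5 (symmetry over a closed attribute domain, and meta rules triggered by inferred subset equivalence) are exactly where a plain loop stops being enough and something Rete-like or a constraint solver starts to pay for itself.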

  • Do Websites need Local Databases Anymore?

    - by viatropos
    If there's a better place to ask this, please let me know. Every time I build a new website/blog/shopping-cart/etc., I keep trying to do the following:

    - Extract common functionality into reusable code (Rubygems and jQuery plugins, mostly).
    - If possible, convert that gem into a small service so I never have to deal with a database for the objects involved (by service, I mean something lean and mean, usually built with the Sinatra web framework with a few core models).

    My assumption is, if I can remove dependencies on local databases, that will make things easier and more scalable in the long run (scalable in terms of reusability and manageability, not necessarily database performance). I'm not sure if that's a good or bad assumption yet. What do you think? I've made this assumption because most serious database/model functionality has been built on the internet somewhere. Just to name a few:

        Social Network API: Facebook
        Messaging API: Twitter
        Mailing API: Google
        Event API: Eventbrite
        Shopping API: Shopify
        Comment API: Disqus
        Form API: Wufoo
        Image API: Picasa
        Video API: Youtube
        ...

    Each of those things is fairly complicated to build from scratch and to make as optimized, simple, and easy to use as those companies have made them. So if I build an app that shows pictures (Picasa) on an event page (Eventbrite), where you can see who joined the event (Facebook Events), send them emails (Google Apps API), have them fill out monthly surveys (Wufoo), and watch a video when they're done (Youtube) - all integrated into a custom, easy-to-use website - and I can do that without ever creating a local database, is that a good thing?

    I ask because there are two things missing from the puzzle that keep forcing me to create that local database:

    - a Post API
    - a RESTful/pretty URL API

    While there are plenty of blogging systems and APIs for them, there is no one place where you can just write content and have it be part of some massive thing. For every app, I have to write code for creating pretty/RESTful URLs and for saving posts. But it seems like that should be a service!

    The question is, is that what a website is? ...that place to integrate the world's services for my specific cause... and, sigh, to store posts that only my site has access to. Will everyone always need "their own blog"? Why not just have a profile and write lots of content on an established platform like StackOverflow or Facebook? ...that way I could write apps entirely without a database and know that I'm doing it right.

    Note: of course at some point you'd need a database, if you were doing something unique or new. But for the case where you're just rewiring information or creating things like videos, events, and products, is it really necessary anymore??

  • Would a Centralized Blogging Service Work?

    - by viatropos
    If there's a better place to ask this, please let me know. Every time I build a new website/blog/shopping-cart/etc., I keep trying to do the following:

    - Extract common functionality into reusable code (Rubygems and jQuery plugins, mostly).
    - If possible, convert that gem into a small service so I never have to deal with a database for the objects involved (by service, I mean something lean and mean, usually built with the Sinatra web framework with a few core models).

    My assumption is, if I can remove dependencies on local databases, that will make things easier and more scalable in the long run (scalable in terms of reusability and manageability, not necessarily database performance). I'm not sure if that's a good or bad assumption yet. What do you think? I've made this assumption because most serious database/model functionality has been built on the internet somewhere. Just to name a few:

        Social Network API: Facebook
        Messaging API: Twitter
        Mailing API: Google
        Event API: Eventbrite
        Shopping API: Shopify
        Comment API: Disqus
        Form API: Wufoo
        Image API: Picasa
        Video API: Youtube
        ...

    Each of those things is fairly complicated to build from scratch and to make as optimized, simple, and easy to use as those companies have. So if I build an app that shows pictures (Picasa) on an event page (Eventbrite), where you can see who joined the event (Facebook Events), send them emails (Google Apps API), have them fill out monthly surveys (Wufoo), and watch a video when they're done (Youtube) - all integrated into a custom, easy-to-use website - and I can do that without ever creating a local database, is that a good thing?

    I ask because there are two things missing from the puzzle that keep forcing me to create that local database:

    - a Post API
    - a RESTful/pretty URL API

    While there are plenty of blogging systems and APIs for them, there is no one place where you can just write content and have it be part of some massive thing. For every app, I have to write code for creating pretty/RESTful URLs and for saving posts. But it seems like that should be a service!

    The question is, is that the main point of a website? Will everyone always need "their own blog"? Why not just have a profile and write lots of content on an established platform like StackOverflow or Facebook?

  • What am I not getting about this abstract class implementation?

    - by Schnapple
    PREFACE: I'm relatively inexperienced in C++, so this very well could be a Day 1 n00b question.

    I'm working on something whose long-term goal is to be portable across multiple operating systems. I have the following files:

    Utilities.h

        #include <string>

        class Utilities {
        public:
            Utilities() { };
            virtual ~Utilities() { };

            virtual std::string ParseString(std::string const& RawString) = 0;
        };

    UtilitiesWin.h (for the Windows class/implementation)

        #include <string>
        #include "Utilities.h"

        class UtilitiesWin : public Utilities {
        public:
            UtilitiesWin() { };
            virtual ~UtilitiesWin() { };

            virtual std::string ParseString(std::string const& RawString);
        };

    UtilitiesWin.cpp

        #include <string>
        #include "UtilitiesWin.h"

        std::string UtilitiesWin::ParseString(std::string const& RawString)
        {
            // Magic happens here!
            // I'll put in a line of code to make it seem valid
            return "";
        }

    So then elsewhere in my code I have this:

        #include <string>
        #include "Utilities.h"

        void SomeProgram::SomeMethod()
        {
            Utilities *u = new Utilities();
            StringData = u->ParseString(StringData); // StringData defined elsewhere
        }

    The compiler (Visual Studio 2008) is dying on the instance declaration:

        c:\somepath\somecode.cpp(3) : error C2259: 'Utilities' : cannot instantiate abstract class
            due to following members:
            'std::string Utilities::ParseString(const std::string &)' : is abstract
            c:\somepath\utilities.h(9) : see declaration of 'Utilities::ParseString'

    So in this case what I want is to use the abstract class (Utilities) like an interface and have it know to go to the implemented version (UtilitiesWin). Obviously I'm doing something wrong, but I'm not sure what. It occurs to me as I'm writing this that there's probably a crucial connection between the UtilitiesWin implementation of the Utilities abstract class that I've missed, but I'm not sure where. I mean, the following works:

        #include <string>
        #include "UtilitiesWin.h"

        void SomeProgram::SomeMethod()
        {
            Utilities *u = new UtilitiesWin();
            StringData = u->ParseString(StringData); // StringData defined elsewhere
        }

    but it means I'd have to conditionally go through the different versions later (i.e., UtilitiesMac(), UtilitiesLinux(), etc.). What have I missed here?
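    The missing piece is that an abstract type can only ever be new'd as one of its concrete subclasses, so the platform choice has to live in exactly one place - a factory. Sketched in Java for brevity (the same shape works in C++ with a static create() returning a Utilities*); the names mirror the question, but the factory itself is an assumption, not the author's code:

        // Abstract base, analogous to Utilities.h.
        abstract class Utilities {
            abstract String parseString(String raw);

            // The single place that decides which concrete implementation to use.
            static Utilities create() {
                String os = System.getProperty("os.name").toLowerCase();
                if (os.contains("win")) return new UtilitiesWin();
                throw new UnsupportedOperationException("no Utilities for " + os);
            }
        }

        // Concrete platform implementation, analogous to UtilitiesWin.
        class UtilitiesWin extends Utilities {
            String parseString(String raw) {
                return ""; // magic happens here
            }
        }

        class SomeProgram {
            void someMethod() {
                Utilities u = Utilities.create(); // never `new Utilities()`
                String data = u.parseString("input");
            }
        }

    Callers still program against the abstract type everywhere; only create() knows the per-platform conditional, so adding UtilitiesMac or UtilitiesLinux touches one function.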

  • What is a good platform for building a game framework targetting both web and native languages?

    - by fuzzyTew
    I would like to develop (or find, if one is already in development) a framework with support for accelerated graphics and sound, built on a system flexible enough to compile to the following:

    - native ppc/x86/x86_64/arm binaries, or a language which compiles to them
    - javascript
    - actionscript bytecode, or a language which compiles to it (ActionScript 3, haXe)
    - optionally Java

    I imagine, for example, creating an API where I can open windows and make OpenGL-like calls, and the framework maps this in a relatively efficient manner to either WebGL with a canvas object, 3D graphics in Flash, OpenGL ES 2 with EGL, or desktop OpenGL in an X11, Windows, or Cocoa window. I have so far looked into these avenues:

    Building the game library in haXe
    - Pros:
      - Targets exist for php, javascript, actionscript bytecode, c++
      - High-level, object-oriented language
    - Cons:
      - No support for finally{} blocks or destructors, making resource cleanup difficult
      - The C++ target does not allow room for producing highly optimized libraries: the foreign function interface requires all primitive types to be boxed in a wrapper object, as if writing bindings for a scripting language; this feels unideal for real-time graphics and audio, especially for exporting low-level functions
      - Doesn't seem quite mature yet

    Using the C preprocessor to create a translator, writing programs entirely with macros
    - Pros:
      - CPP is widespread and simple to use
    - Cons:
      - This is an arduous task and probably the wrong tool for the job
      - CPP implementations differ widely in support for features (e.g. Xcode's cpp has no variadic macros despite claiming C99 compliance)
      - There is little-to-no room for optimization in this route

    Using LLVM's support for multiple backends to target C/C++ to web languages
    - Pros:
      - Can code in C/C++
      - LLVM is a very mature, highly optimizing compiler, performing e.g. global inlining
      - Targets exist for actionscript (Alchemy) and javascript (Emscripten)
    - Cons:
      - The actionscript target is closed source, unmaintained, and buggy
      - The javascript targets do not use features of HTML5 for appropriate optimization (e.g. linear memory with typed arrays) and are immature
      - An LLVM target must convert from low-level bytecode, so high-level constructs are lost, and bloated, unreadable code is created from translating individual instructions, which may be more difficult for an unprepared JIT to optimize; "jump" instructions cause problems for languages with no "goto" statement

    Using libclang to write a translator from C/C++ to web languages
    - Pros:
      - A beautiful parsing library providing easy access to the code structure
      - Can code in C/C++
      - Has sponsored developer effort from Apple
    - Cons:
      - Incomplete; the current feature set targets IDEs. Basic operators are unexposed and must be manually parsed from the returned AST element to be identified
      - Translating code prior to compilation may forgo optimizations assumed in C/C++, such as inlining

    Creating new code generators for clang to translate into web languages
    - Pros:
      - Can code in C/C++, as with libclang
    - Cons:
      - There is no API; the code structure is unstable
      - A much larger job than using libclang; the innards of clang are complex

    Building the game library in Common Lisp
    - Pros:
      - Flexible, ancient, well-developed language
      - Extensive introspection should ease writing translators
      - Translators exist for at least javascript
    - Cons:
      - Unfamiliar language
      - No standardized library functions, widely varying implementations

    Which of these avenues should I pursue? Do you know of any others, or any systems that might be useful? Does a general project like this exist somewhere already? Thank you for any input.

  • NHibernate : delete error

    - by MadSeb
    Hi,

    Model: I have a model in which one Installation can contain multiple "Computer Systems".

    Database: the table Installations has two columns, Name and Description. The table ComputerSystems has three columns: Name, Description and InstallationId.

    Mappings: I have the following mapping for Installation:

        <?xml version="1.0" encoding="utf-8"?>
        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="myProgram.Core" namespace="myProgram">
          <class name="Installation" table="Installations" lazy="true">
            <id name="Id" column="Id" type="int">
              <generator class="native" />
            </id>
            <property name="Name" column="Name" type="string" not-null="true" />
            <property name="Description" column="Description" type="string" />
            <bag name="ComputerSystems" inverse="true" lazy="true" cascade="all-delete-orphan">
              <key column="InstallationId" />
              <one-to-many class="ComputerSystem" />
            </bag>
          </class>
        </hibernate-mapping>

    I have the following mapping for ComputerSystem:

        <?xml version="1.0" encoding="utf-8"?>
        <id name="Id" column="ID" type="int">
          <generator class="native" />
        </id>
        <property name="Name" column="Name" type="string" not-null="true" />
        <property name="Description" column="Description" type="string" />
        <many-to-one name="Installation" column="InstallationID" cascade="save-update" not-null="true" />

    Classes: the Installation class is:

        public class Installation
        {
            public virtual String Description { get; set; }
            public virtual String Name { get; set; }

            public virtual IList<ComputerSystem> ComputerSystems
            {
                get
                {
                    if (_computerSystemItems == null)
                    {
                        lock (this)
                        {
                            if (_computerSystemItems == null)
                                _computerSystemItems = new List<ComputerSystem>();
                        }
                    }
                    return _computerSystemItems;
                }
                set { _computerSystemItems = value; }
            }

            protected IList<ComputerSystem> _computerSystemItems;

            public Installation()
            {
                Description = "";
                Name = "";
            }
        }

    The ComputerSystem class is:

        public class ComputerSystem
        {
            public virtual String Name { get; set; }
            public virtual String Description { get; set; }
            public virtual Installation Installation { get; set; }
        }

    The issue is that I get an error when trying to delete an Installation that contains a ComputerSystem. The error is:

        deleted object would be re-saved by cascade (remove deleted object from associations)

    Can anyone help?

    Regards, Seb

  • How do I integrate a new MVC C# Project with an existing Web Forms VB.NET Web Application Project?

    - by Jordan Rieger
    We have a corporate website with a large number of dynamic business application pages (e.g. shopping cart, helpdesk, product/service management, reporting, etc.). The site was built as an ASP.NET Web Application Project (WAP). Our systems have evolved over the years to use .NET 4.5 and various custom business logic DLLs (written in a mix of C# and VB.NET). However, the site itself still uses VB.NET Web Forms.

    We have now done a few side projects in MVC 4 using Razor/C#, and we want to use this framework for new pages on the main corporate site going forward. What would be the easiest way to achieve this?

    I found this nice list of steps to integrate MVC 4 into an existing Web Forms app. The problem is that because our existing app is a VB.NET WAP, it compiles into a single DLL, and .NET allows only one language per DLL. The site is way too big for us to contemplate converting it to C# all at once (yes, I've looked at the conversion tools, and they're good, but even 99% accuracy would leave us a huge amount of cleanup work).

    I thought about converting the existing WAP into a Web Site Project (WSP), which does allow mixing languages, and then following the steps above, but after a few pages of Google results I couldn't find any steps for converting a WAP to a WSP. (Plenty of sites offer the reverse steps: converting a WSP to a WAP.)

    Another idea I had was to create a completely separate MVC project and then somehow squish the two together into the same folder structure, where they would share the bin folder but compile to separate DLLs. I have no idea if this is possible, because certain files would collide (e.g. Global.asax, web.config, etc.).

    Finally, I can imagine a compromise solution where we keep all the MVC stuff in its own separate application under a subfolder of the main solution. We already use our own custom session state solution, so it wouldn't be difficult to pass data between the old site and the new pages.

    Which of the ideas above do you think makes the most sense for us? Is there another solution that I'm missing?

  • The best way to separate admin functionality from a public site?

    - by AndrewO
    I'm working on a site that's grown, both in terms of user base and functionality, to the point where it's becoming evident that some of the admin tasks should be separate from the public website. I was wondering what the best way to do this would be.

    For example, the site has a large social component to it, and a public sales interface. But at the same time, there are back-office tasks, bulk upload processing, dashboards (with long-running queries), and customer relations tools in the admin section that I would like not to be affected by spikes in public traffic (or to affect the public-facing response time).

    The site is running on a fairly standard Rails/MySQL/Linux stack, but I think this is more of an architecture problem than an implementation one: mainly, how does one keep the data and business logic in sync between these different applications? Some strategies that I'm evaluating:

    1. Create a slave database of the public-facing database on another machine. Extract all of the model and library code so that it can be shared between the applications. Create new controllers and views for the admin interfaces. I have limited experience with replication and am not even sure that it's supposed to be used this way (most of the time I've seen it, it's been for scaling out the read capabilities of the same application, rather than serving multiple different ones). I'm also worried about the potential for latency issues if the slave is not on the same network.

    2. Create new, more task/department-specific applications and use message-oriented middleware to integrate them. I read Enterprise Integration Patterns a while back, and it seemed to advocate this for distributed systems. (Alternatively, in some cases the basic Rails-style RESTful API functionality might suffice.) But I have nightmares about data synchronization issues and the massive re-architecting this would entail.

    3. Some mixture of the two. For example, the only public information necessary for some of the back-office tasks is a read-only completion time or status. Would it make sense to have that on a completely separate system and send the data to the public one? Meanwhile, the user/group admin functionality would run on a separate system sharing the database. The downside is that this seems to keep many of the concerns I have with the first two, especially the re-architecting.

    I'm sure the answers are going to be highly dependent on a site's specific needs, but I'd love to hear success (or failure) stories.

  • Cannot figure out how to take in generic parameters for an Enterprise Framework library sql statemen

    - by KallDrexx
    I have written a specialized class to wrap up the Enterprise Library database functionality for easier usage. The reasoning for using the Enterprise Library is that my applications commonly connect to both Oracle and SQL Server database systems. My wrapper handles creating connection strings on the fly, connecting, and executing queries, allowing my main code to do database work and error handling in only a few lines. As an example, my ExecuteNonQuery method has the following declaration:

        /// <summary>
        /// Executes a query that returns no results (e.g. insert or update statements)
        /// </summary>
        /// <param name="sqlQuery"></param>
        /// <param name="parameters">Hashtable containing all the parameters for the query</param>
        /// <returns>The total number of records modified, -1 if an error occurred</returns>
        public int ExecuteNonQuery(string sqlQuery, Hashtable parameters)
        {
            // Make sure we are connected to the database
            if (!IsConnected)
            {
                ErrorHandler("Attempted to run a query without being connected to a database.",
                             ErrorSeverity.Critical);
                return -1;
            }

            // Form the command
            DbCommand dbCommand = _database.GetSqlStringCommand(sqlQuery);

            // Add all the parameters
            foreach (string key in parameters.Keys)
            {
                if (parameters[key] == null)
                    _database.AddInParameter(dbCommand, key, DbType.Object, null);
                else
                    _database.AddInParameter(dbCommand, key, DbType.Object, parameters[key].ToString());
            }

            return _database.ExecuteNonQuery(dbCommand);
        }

    _database is defined as private Database _database;. The parameters Hashtable is created via code similar to p.Add("@param", value);.

    The issue I am having is that it seems that with the Enterprise Library database framework you must declare the DbType of each parameter. This isn't an issue when you are calling the database code directly and forming the parameters yourself, but it doesn't work for a generic abstraction class such as mine. To try and get around that, I thought I could just use DbType.Object and figure the DB would work it out based on the columns the SQL is working with. Unfortunately, this is not the case, as I get the following error:

        Implicit conversion from data type sql_variant to varchar is not allowed. Use the CONVERT function to run this query.

    Is there any way to use generic parameters in a wrapper class, or am I just going to have to move all my DB code into my main classes?

  • mounting ext4 fs with block size of 65536

    - by seaquest
    I am doing some benchmarking of EXT4 performance on Compact Flash media. I have created an ext4 fs with a block size of 65536; however, I cannot mount it on ubuntu-10.10-netbook-i386 (it already mounts ext4 filesystems with 4096-byte block sizes). According to my reading on ext4, it should allow such big block sizes. I want to hear your comments.

        root@ubuntu:~# mkfs.ext4 -b 65536 /dev/sda3
        Warning: blocksize 65536 not usable on most systems.
        mke2fs 1.41.12 (17-May-2010)
        mkfs.ext4: 65536-byte blocks too big for system (max 4096)
        Proceed anyway? (y,n) y
        Warning: 65536-byte blocks too big for system (max 4096), forced to continue
        Filesystem label=
        OS type: Linux
        Block size=65536 (log=6)
        Fragment size=65536 (log=6)
        Stride=0 blocks, Stripe width=0 blocks
        19968 inodes, 19830 blocks
        991 blocks (5.00%) reserved for the super user
        First data block=0
        1 block group
        65528 blocks per group, 65528 fragments per group
        19968 inodes per group
        Writing inode tables: done
        Creating journal (1024 blocks): done
        Writing superblocks and filesystem accounting information: done

        This filesystem will be automatically checked every 37 mounts or 180 days,
        whichever comes first. Use tune2fs -c or -i to override.

        root@ubuntu:~# tune2fs -l /dev/sda3
        tune2fs 1.41.12 (17-May-2010)
        Filesystem volume name:   <none>
        Last mounted on:          <not available>
        Filesystem UUID:          4cf3f507-e7b4-463c-be11-5b408097099b
        Filesystem magic number:  0xEF53
        Filesystem revision #:    1 (dynamic)
        Filesystem features:      has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
        Filesystem flags:         signed_directory_hash
        Default mount options:    (none)
        Filesystem state:         clean
        Errors behavior:          Continue
        Filesystem OS type:       Linux
        Inode count:              19968
        Block count:              19830
        Reserved block count:     991
        Free blocks:              18720
        Free inodes:              19957
        First block:              0
        Block size:               65536
        Fragment size:            65536
        Blocks per group:         65528
        Fragments per group:      65528
        Inodes per group:         19968
        Inode blocks per group:   78
        Flex block group size:    16
        Filesystem created:       Sat Feb  5 14:39:55 2011
        Last mount time:          n/a
        Last write time:          Sat Feb  5 14:40:02 2011
        Mount count:              0
        Maximum mount count:      37
        Last checked:             Sat Feb  5 14:39:55 2011
        Check interval:           15552000 (6 months)
        Next check after:         Thu Aug  4 14:39:55 2011
        Lifetime writes:          70 MB
        Reserved blocks uid:      0 (user root)
        Reserved blocks gid:      0 (group root)
        First inode:              11
        Inode size:               256
        Required extra isize:     28
        Desired extra isize:      28
        Journal inode:            8
        Default directory hash:   half_md4
        Directory Hash Seed:      afb5b570-9d47-4786-bad2-4aacb3b73516
        Journal backup:           inode blocks

        root@ubuntu:~# mount -t ext4 /dev/sda3 /mnt/
        mount: wrong fs type, bad option, bad superblock on /dev/sda3,
               missing codepage or helper program, or other error
               In some cases useful info is found in syslog - try
               dmesg | tail or so
