Search Results

Search found 1480 results on 60 pages for 'jav 000'.

Page 6/60 | < Previous Page | 2 3 4 5 6 7 8 9 10 11 12 13  | Next Page >

  • IPSEC site-to-site Openswan to Cisco ASA

    - by Jim
    I received a list of commands that were run on the right side of the VPN tunnel, which is where the Cisco ASA resides. On my side, I have a Linux-based firewall running Debian with Openswan installed. I am having an issue with getting to Phase 2 of the VPN negotiation. Here is the Cisco information I was sent: {my_public_ip} = left side of connection tunnel-group {my_public_ip} type ipsec-l2l tunnel-group {my_public_ip} ipsec-attributes pre-shared-key fakefake crypto map vpn1 1 match add customer-ipsec crypto map vpn1 1 set peer {my_public_ip} crypto map vpn1 1 set transform-set aes-256-sha crypto map vpn1 interface outside static (outside,inside) 10.2.1.200 {my_public_ip} netmask 255.255.255.255 crypto ipsec transform-set aes-256-sha esp-aes-256 esp-sha-hmac crypto ipsec security-association lifetime seconds 28800 crypto ipsec security-association lifetime kilobytes 4608000 crypto map vpn1 1 match address customer-ipsec crypto map vpn1 1 set peer {my_public_ip} crypto map vpn1 1 set transform-set aes-256-sha crypto map vpn1 interface outside crypto isakmp enable outside crypto isakmp policy 1 authentication pre-share encryption aes-256 hash sha group 2 lifetime 86400 My side's ipsec.conf: config setup klipsdebug=none plutodebug=none protostack=netkey #nat_traversal=yes conn cisco #name of VPN connection type=tunnel authby=secret #left side (my side) left={myPublicIP} leftsubnet=172.16.250.0/24 #net subnet on left side to assign to right side leftnexthop=%defaultroute #right security gateway (ASA side) right={CiscoASA_publicIP} #cisco ASA rightsubnet=10.2.1.0/24 rightnexthop=%defaultroute #crypto stuff keyexchange=ike ikelifetime=86400s auth=esp pfs=no compress=no auto=start ipsec.secrets file: {CiscoASA_publicIP} {myPublicIP}: PSK "fakefake" When I start ipsec from the left side (my side) I don't receive any errors, but when I run the ipsec auto --status command: 000 "cisco": 172.16.250.0/24==={left_public_ip}<{left_public_ip}>[+S=C]---{left_public_ip_gateway}...{left_public_ip_gateway}--{right_public_ip}<{right_public_ip}>[+S=C]===10.2.1.0/24; prospective erouted; eroute owner: #0 000 "cisco": myip=unset; hisip=unset; 000 "cisco": ike_life: 86400s; ipsec_life: 28800s; rekey_margin: 540s; rekey_fuzz: 100%; keyingtries: 0 000 "cisco": policy: PSK+ENCRYPT+TUNNEL+UP+IKEv2ALLOW+SAREFTRACK+lKOD+rKOD; prio: 24,24; interface: eth0; 000 "cisco": newest ISAKMP SA: #0; newest IPsec SA: #0; 000 000 #2: "cisco":500 STATE_MAIN_I1 (sent MI1, expecting MR1); EVENT_RETRANSMIT in 10s; nodpd; idle; import:admin initiate 000 #2: pending Phase 2 for "cisco" replacing #0 Now, I'm new to setting up a site-to-site IPsec tunnel, so I am unsure what this status information means. All I know is that it sits at "pending Phase 2" and I can't ping the other side. Another question I have is: if I do a route -n, should I see anything relating to this connection? Also, I read a few articles where configs contained interface="ipsec0=eth0"; is this an interface that I have to create on the Linux Debian firewall on my side? Appreciate your time to look at this.

    Read the article

  • What makes signed integers behave differently?

    - by 000
    In this example of x86_64 hex/disassembled code I see: 48B80000000000000000 mov rax, 0x0 Signed Byte 52 Unsigned Byte 52 Signed Short 14388 Unsigned Short 14388 Signed Int 943863860 Unsigned Int 943863860 Signed Int64 3472328296363079732 Unsigned Int64 3472328296363079732 Float 4.630555e-05 Double 1.39804332763832e-76 String 48B80000000000000000 which to me appears to have the same functionality as: 48C7C000000000 mov rax, 0x0 48C7C000000000 Signed Byte 52 Unsigned Byte 52 Signed Short 14388 Unsigned Short 14388 Signed Int 927152180 Unsigned Int 927152180 Signed Int64 3472328377950746676 Unsigned Int64 3472328377950746676 Float 1.163599e-05 Double 1.39806836023098e-76 String 48C7C000000000 How is the first example treated differently from the second example?
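
    For what it's worth, the two encodings differ only in how wide their immediate operand is. A small Python sketch (my own illustration, not part of the question) that pulls the immediate out of each form, assuming the standard x86-64 encodings: 48 B8 carries a full 64-bit immediate, while 48 C7 C0 carries a 32-bit immediate that the CPU sign-extends into RAX:

        import struct

        def decode_mov_rax_imm(code_hex):
            """Return the immediate moved into RAX by the two encodings above."""
            code = bytes.fromhex(code_hex)
            if code[:2] == b"\x48\xb8":
                (imm,) = struct.unpack_from("<q", code, 2)  # 8-byte signed immediate
            elif code[:3] == b"\x48\xc7\xc0":
                (imm,) = struct.unpack_from("<i", code, 3)  # 4-byte signed immediate, sign-extended by the CPU
            else:
                raise ValueError("unrecognized encoding")
            return imm

        print(decode_mov_rax_imm("48B80000000000000000"))  # 0
        print(decode_mov_rax_imm("48C7C000000000"))         # 0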

    Read the article

  • T-SQL: I am transforming data

    - by João Pedro Portelinha
    I am transforming data from this legacy table: MovTime (IdMov INT, IdPerson NVARCHAR(20), Date1 datetime, Type1 nvarchar(30))

    IdMov       IdPerson             Date1                   Type
    ----------- -------------------- ----------------------- ------------------------------
    1           David                2012-06-01 09:00:00.000 Entered
    2           David                2012-06-01 12:30:00.000 Exit
    3           David                2012-06-01 14:00:00.000 Entered
    4           David                2012-06-01 18:30:00.000 Exit
    5           Kim                  2012-06-02 09:00:00.000 Entered
    6           Kim                  2012-06-02 12:00:00.000 Exit
    ...

    I want the result to be the following:

    IdPerson   Data       Total Time
    ---------- ---------- ----------
    David      2012-06-01 08:00:00
    Kim        2012-06-02 03:00:00

    T-SQL:

    declare @WK_TABLE TABLE (IdMov INT, IdPerson NVARCHAR(20), Date1 datetime, Type1 nvarchar(30))
    Insert into @WK_TABLE values(1,'David', '2012-06-01 09:00', 'Entered')
    Insert into @WK_TABLE values(2,'David', '2012-06-01 12:30', 'Exit')
    Insert into @WK_TABLE values(3,'David', '2012-06-01 14:00', 'Entered')
    Insert into @WK_TABLE values(4,'David', '2012-06-01 18:30', 'Exit')
    Insert into @WK_TABLE values(5,'Kim', '2012-06-02 09:00', 'Entered')
    Insert into @WK_TABLE values(6,'Kim', '2012-06-02 12:00', 'Exit')
    select * from @WK_TABLE

    Can someone help me?

    Read the article

  • SciPy interp1d results are different than MatLab interp1

    - by LMO
    I'm converting a MatLab program to Python, and I'm having problems understanding why scipy.interpolate.interp1d is giving different results than MatLab interp1. In MatLab the usage is slightly different: yi = interp1(x,Y,xi,'cubic') SciPy: f = interp1d(x,Y,kind='cubic') yi = f(xi) For a trivial example the results are the same: MatLab: interp1([0 1 2 3 4], [0 1 2 3 4],[1.5 2.5 3.5],'cubic') 1.5000 2.5000 3.5000 Python: interp1d([1,2,3,4],[1,2,3,4],kind='cubic')([1.5,2.5,3.5]) array([ 1.5, 2.5, 3.5]) But for a real-world example they are not the same: x = 0.0000e+000 2.1333e+001 3.2000e+001 1.6000e+004 2.1333e+004 2.3994e+004 Y = -6 -6 20 20 -6 -6 xi = 0.00000 11.72161 23.44322 35.16484... (2048 data points) Matlab: -6.0000e+000 -1.2330e+001 -3.7384e+000 ... 7.0235e+000 7.0028e+000 6.9821e+000 SciPy: array([[ -6.00000000e+00], [ -1.56304101e+01], [ -2.04908267e+00], ..., [ 1.64475576e+05], [ 8.28360759e+04], [ -5.99999999e+00]]) Any thoughts as to how I can get results that are consistent with MatLab?
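
    One likely explanation (an assumption, not something stated in the question): in MatLab releases of that era, interp1's 'cubic' option was the same shape-preserving piecewise cubic Hermite interpolant as 'pchip', while SciPy's kind='cubic' fits an unconstrained cubic spline, which can overshoot badly on unevenly spaced data like the x above. A quick SciPy sketch comparing the two:

        from scipy.interpolate import PchipInterpolator, interp1d

        x = [0.0, 21.333, 32.0, 16000.0, 21333.0, 23994.0]
        Y = [-6, -6, 20, 20, -6, -6]
        xi = [0.0, 11.72161, 23.44322, 35.16484]  # first few of the 2048 points

        spline = interp1d(x, Y, kind='cubic')  # unconstrained cubic spline, can overshoot
        pchip = PchipInterpolator(x, Y)        # shape-preserving cubic Hermite ('pchip')

        print(spline(xi))
        print(pchip(xi))

    If the pchip curve matches the MatLab numbers, the difference is the interpolant rather than the data.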

    Read the article

  • Need guidance on a Google Map application that has to show 250 000 polylines.

    - by lucian.jp
    I am looking for advice for an application I am developing that uses Google Maps.

    Summary: A user has a list of criteria for searching a street segment that fulfills the criteria. The street segments will be colored with 3 colors showing those below average, average and over average. Then the user clicks on a street segment to see an information window showing the properties of that specific segment, hiding those not selected until he/she closes the window and the other polylines become visible again. This looks quite like the Monopoly City Streets game Hasbro made some months ago, the differences being that I do not use Flash, I can't use Open Street Map because it doesn't list street segments (and if it does, the IDs won't be the same anyway), and I do not have to show Google sketch buildings over it.

    Information: I have a database of street segments with IDs, polyline points and centroid. The database has 6,000,000 street segment records in it. To narrow the generated data a bit we focus on one city. The largest city we must show has 250,000 street segments. This means 250,000 line segment polylines to show. Our longest polyline uses 9,600 characters, which is stored in two 8000-character varchar columns in SQL Server 2008. We need to use the API v3 because it is faster than the API v2 and the application will be ported to iPhone. For now it's an ASP.NET 3.5 with SQL Server 2008 application. Performance is a priority.

    Problems: Most of the demo projects that do this are made with API v2, so besides the tutorial on the Google API v3 reference page I have nothing to compare performance or technology choices against to achieve my goal. There is no available .NET wrapper for the API v3 yet. Generating a 250,000 line segment polyline creates a heavy file which takes time to transfer and parse. (I have found a demo of one polyline of 390,000 points. I think the encoder would be far less efficient with more polylines with fewer points, since there will be less rounding.) Since street segments are shown based on criteria, polylines must be dynamically created and a cache can't be used.

    Some thoughts:

    KML/KMZ: Pros: Since it is a standard we can easily load Bing maps, Yahoo! maps, Google maps, Google Earth, with the same KML file. The data generation would be the same. Cons: A LineString in KML cannot be an encoded polyline like the Google Maps API can handle, so it would probably be bigger and slower to display. Zipping a file of that size will take more processing time and require the client side to uncompress the data, and I am not quite sure, with 250,000 segments, how an iPhone would handle this and how a server would handle 40 users browsing at the same time.

    JavaScript file: Pros: A JavaScript file can have encoded polylines and would significantly reduce the file to transfer. Cons: I have to create my own stripped version of API v3 to add overlays, create polylines, etc. It is more complex than just creating a KML file and pointing to the source.

    GeoRSS: This option isn't adapted for my needs, I think, but I could be wrong.

    MapServer: I saw some posts suggesting using MapServer to generate overlays. I am not quite sure about the connection with our database and the performance it would give. Plus it requires a plugin for generating KML. It seems to me that it wouldn't allow me to do better than creating my own KML or JavaScript file, and maintenance would be simpler without it.

    Monopoly City Streets: The game is now over, but for those who know what I am talking about, Monopoly City Streets showed at max zoom level only the streets whose centroid was inside the bounds of the window. Moving the map sent a request to the server for the new streets to show. While I think this was ingenious, I have no idea how to implement something similar. The only thing I thought about was to compare whether the longitude was inside the bounds of the map area, and the same with the latitude. While this could improve performance significantly at high zoom levels, it would give nothing when showing a whole city.

    Clustering: While clustering is awesome for markers, it seems we cannot cluster polylines. I would have liked something like MarkerClusterer for polylines and to be able to cluster by my 3 polyline colors. This will probably stay as a "would have been freaking awesome but forget it".

    Arrow: In a future version I will have to show a direction for the polyline, and will have to show an arrow at the centroid. Loading an image or marker would only double my data, so creating a custom overlay will probably be my only option. I have found a demo of something similar to what I would like to achieve. Unfortunately, the demo is very slow, but I only wish to show 1 arrow per polyline and not multiple like the demo. This functionality will depend on the format of the data, since I don't think KML supports custom overlays.

    Criteria: While the application is done with ASP.NET 3.5, the port to the iPhone won't use the web to show the application and will be limited in screen size for selecting the criteria. This is why I was leaning more towards a service or page that generates the file based on criteria passed as parameters. The service would then generate the file I need to display the polylines on the map. I could also create an aspx page that does this. The aspx page approach is better documented than the service approach; there should be a reason.

    Questions: Should I create a web service that returns the street segments file, or create an aspx page that returns the file? Should I create a JavaScript file with encoded polylines or a KML with longitude/latitude, given that the largest longitude/latitude polyline has 9,600 characters and I have to render at most 250,000 line segment polylines? Or should I go with a MapServer that generates the overlay? Will I be able to display a simple arrow on the polylines in the next version? In the case of KML generation, is it faster to create the file manually with XDocument, XmlDocument or XmlWriter, or to just serialize the street segments into the stream? This is more a brainstorming Stack Overflow question than an actual code problem. Any answer helping narrow the possibilities is as good as someone having all the knowledge to point me to a better choice.
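
    In case it helps with the KML-versus-JavaScript size question: the encoded polyline format the Maps API consumes is the publicly documented Google polyline algorithm (delta-encode each coordinate at 1e-5 precision, then pack the result 5 bits at a time into printable characters). A rough Python sketch of the encoder, shown only to illustrate how compact the output is, not as production code:

        def encode_polyline(points, precision=1e5):
            """Google encoded polyline algorithm: delta-encode, bit-shift, emit 5-bit chunks."""
            out = []
            prev_lat = prev_lng = 0
            for lat, lng in points:
                lat_i, lng_i = round(lat * precision), round(lng * precision)
                for delta in (lat_i - prev_lat, lng_i - prev_lng):
                    v = ~(delta << 1) if delta < 0 else (delta << 1)
                    while v >= 0x20:
                        out.append(chr((0x20 | (v & 0x1F)) + 63))
                        v >>= 5
                    out.append(chr(v + 63))
                prev_lat, prev_lng = lat_i, lng_i
            return "".join(out)

        # The three-point example from Google's polyline documentation:
        print(encode_polyline([(38.5, -120.2), (40.7, -120.95), (43.252, -126.453)]))
        # _p~iF~ps|U_ulLnnqC_mqNvxq`@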

    Read the article

  • JMeter exception: response code 000, response message: Read timed out, for a Java web service?

    - by vipin k.
    I am testing a Java web service (JAX-WS), but whenever I run the test I get response code 000 and response message: Read timed out. And on the Tomcat server side I am getting the exception: SEVERE: caught throwable ClientAbortException: java.net.SocketException: Connection reset by peer: socket write error at org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:358) at org.apache.tomcat.util.buf.ByteChunk.append(ByteChunk.java:354) at org.apache.catalina.connector.OutputBuffer.writeBytes(OutputBuffer.java:381) at org.apache.catalina.connector.OutputBuffer.write(OutputBuffer.java:370) at org.apache.catalina.connector.CoyoteOutputStream.write(CoyoteOutputStream.java:89) I found out that the Read timed out exception may occur because of the big size of the SOAP response. But I am clueless, because I can access the same web service from an application.

    Read the article

  • Effective Method to Manage and Search Through 100,000+ Objects Instantly? (C#)

    - by Kirk
    I'm writing a media player for enthusiasts with large collections (over 100,000 tracks) and one of my main goals is speed in search. I would like to allow the user to perform a Google-esque search of their entire music collection based on these factors: song path and file name, items in the ID3 tag (Title, Artist, Album, etc.), and lyrics. What is the best way for me to store this data and search through it? Currently I am storing each track in an object and iterating over an array of these objects, checking each of their variables for string matches based on the given search text. I've run into problems though, where my search is not effective because it is always a phrase search, and I'm not sure how to make it fuzzier. Would an internal DB like SQLite be faster than this? Any ideas on how I should structure this system? I also need playlist persistence, so that when they close the app and open it again their same playlist loads immediately. How should I store the playlist information so it can load quickly when the application starts? Currently I am JSON-encoding the entire playlist, storing it in a text file, and reading it into the ListView at runtime, but it is getting sluggish over 20,000 tracks. Thanks!
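
    The question is about C#, but the "internal DB" idea is easy to prototype in any language. Here is a hypothetical Python sketch using SQLite's FTS5 full-text index (it assumes an SQLite build with FTS5 compiled in, which most modern builds have), just to show the kind of token/prefix matching a real full-text index gives you instead of a phrase-only linear scan:

        import sqlite3

        conn = sqlite3.connect("library.db")
        conn.execute("""
            CREATE VIRTUAL TABLE IF NOT EXISTS tracks
            USING fts5(path, title, artist, album, lyrics)
        """)
        conn.execute(
            "INSERT INTO tracks VALUES (?, ?, ?, ?, ?)",
            ("/music/help.mp3", "Help!", "The Beatles", "Help!", "Help, I need somebody..."),
        )
        conn.commit()

        # Prefix query across every column, instead of an exact phrase match.
        for row in conn.execute(
            "SELECT path, title, artist FROM tracks WHERE tracks MATCH ?", ("beat*",)
        ):
            print(row)

    The same idea carries over to C# with any embedded SQLite wrapper; the point is that the index does the prefix/fuzzy matching instead of a linear scan over 100,000 objects.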

    Read the article

  • Why do I get "2010-01-01 00:00:00 +900" from "2010-12-31 15:00:00 +000"?

    - by mikezang
    I have an NSDate. It is shown as below if I use NSLog(@"%@", date.description); 2010-12-31 15:00:00 +0000 It is shown as below if I use NSLog(@"%@", [date descriptionWithLocale:[[NSLocale currentLocale] localeIdentifier]]); Saturday, January 1, 2011 12:00:00 AM Japan Standard Time But it is shown as below if I use NSLog(@"%@", [date formattedDateString]); 2010-01-01 00:00:00 +0900 Where did I make a mistake?
    - (NSString *)formattedDateString { return [self formattedStringUsingFormat:@"YYYY-MM-dd HH:mm:ss ZZZ"]; }
    - (NSString *)formattedStringUsingFormat:(NSString *)dateFormat { NSDateFormatter *formatter = [[NSDateFormatter alloc] init]; [formatter setDateFormat:dateFormat]; NSString *ret = [formatter stringFromDate:self]; [formatter release]; return ret; }

    Read the article

  • Algorithm to Find the Aggregate Mass of "Granola Bar"-Like Structures?

    - by Stuart Robbins
    I'm a planetary science researcher and one project I'm working on is N-body simulations of Saturn's rings. The goal of this particular study is to watch as particles clump together under their own self-gravity and measure the aggregate mass of the clumps versus the mean velocity of all particles in the cell. We're trying to figure out if this can explain some observations made by the Cassini spacecraft during the Saturnian summer solstice, when large structures were seen casting shadows on the nearly edge-on rings. Below is a screenshot of what any given timestep looks like. (Each particle is 2 m in diameter and the simulation cell itself is around 700 m across.) The code I'm using already spits out the mean velocity at every timestep. What I need to do is figure out a way to determine the mass of particles in the clumps and NOT the stray particles between them. I know every particle's position, mass, size, etc., but I don't easily know that, say, particles 30,000-40,000 along with 102,000-105,000 make up one strand that to the human eye is obvious. So, the algorithm I need to write would need to be a code with as few user-entered parameters as possible (for replicability and objectivity) that would go through all the particle positions, figure out what particles belong to clumps, and then calculate the mass. It would be great if it could do it for "each" clump/strand as opposed to everything over the cell, but I don't think I actually need it to separate them out. The only thing I was thinking of was doing some sort of N² distance calculation where I'd calculate the distance between every particle and if, say, the closest 100 particles were within a certain distance, then that particle would be considered part of a cluster. But that seems pretty sloppy and I was hoping that you CS folks and programmers might know of a more elegant solution? Edited with My Solution: What I did was to take a sort of nearest-neighbor / cluster approach and do the quick-n-dirty N² implementation first. So, take every particle, calculate the distance to all other particles, and the threshold for being in a cluster or not was whether there were N particles within d distance (two parameters that have to be set a priori, unfortunately, but as was said by some responses/comments, I wasn't going to get away with not having some of those). I then sped it up by not sorting distances but simply doing an order-N search and incrementing a counter for the particles within d, and that sped stuff up by a factor of 6. Then I added a "stupid programmer's tree" (because I know next to nothing about tree codes). I divide up the simulation cell into a set number of grids (best results when grid size ~7 d) where the main grid lines up with the cell, one grid is offset by half in x and y, and the other two are offset by 1/4 in ±x and ±y. The code then divides particles into the grids, and each particle N only has to have distances calculated to the other particles in that cell. Theoretically, if this were a real tree, I should get order N*log(N) as opposed to N² speeds. I got somewhere between the two, where for a 50,000-particle sub-set I got a 17x increase in speed, and for a 150,000-particle cell, I got a 38x increase in speed. 12 seconds for the first, 53 seconds for the second, 460 seconds for a 500,000-particle cell. Those are comparable speeds to how long the code takes to run the simulation 1 timestep forward, so that's reasonable at this point. Oh -- and it's fully threaded, so it'll take as many processors as I can throw at it.
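
    For what it's worth, the "N particles within distance d" criterion maps directly onto a KD-tree neighbour query, which is the usual way to avoid the full N² pass. A hypothetical Python/SciPy sketch (the d and n_min values are placeholders, not numbers from the simulation; return_length needs SciPy 1.3 or newer):

        import numpy as np
        from scipy.spatial import cKDTree

        def clump_mass(positions, masses, d=4.0, n_min=100):
            """Total mass of particles that have at least n_min neighbours within d."""
            tree = cKDTree(positions)
            # Neighbour count within d for every particle (the count includes the particle itself).
            counts = tree.query_ball_point(positions, r=d, return_length=True)
            in_clump = counts > n_min
            return masses[in_clump].sum(), in_clump

        positions = np.random.rand(10000, 3) * 700.0  # toy stand-in for one 700 m cell
        masses = np.ones(len(positions))
        total, flags = clump_mass(positions, masses)
        print(total, flags.sum())

    Splitting the flagged particles into individual strands afterwards is a connected-components pass over the same neighbour lists, much like a friends-of-friends group finder.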

    Read the article

  • How/where to run the algorithm on a large dataset?

    - by niko
    I would like to run the PageRank algorithm on a graph with 4 000 000 nodes and around 45 000 000 edges. Currently I use the neo4j graph database and a classic relational database (Postgres), and for software projects I mostly use C# and Java. Does anyone know what would be the best way to perform a PageRank computation on such a graph? Is there any way to modify the PageRank algorithm in order to run it on a home computer or server (48GB RAM), or is there any useful cloud service to push the data along with the algorithm and retrieve the results? At this stage the project is at the research stage, so if a cloud service is used, I would if possible prefer a provider that doesn't require much administration and service setup, but instead lets me focus just on running the algorithm once and getting the results without much administrative overhead.
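
    As a rough sense of scale (my own back-of-the-envelope, not from the question): 45 000 000 edges stored as a SciPy CSR matrix come to a few hundred megabytes, so plain in-memory power iteration fits easily in 48 GB. A simplified Python sketch, with all names my own and the usual dangling-node correction glossed over:

        import numpy as np
        import scipy.sparse as sp

        def pagerank(edges, n, d=0.85, tol=1e-8, max_iter=100):
            """Power-iteration PageRank over an edge list of (src, dst) pairs, nodes 0..n-1."""
            src, dst = zip(*edges)
            A = sp.csr_matrix((np.ones(len(src)), (dst, src)), shape=(n, n))
            out_deg = np.asarray(A.sum(axis=0)).ravel()
            out_deg[out_deg == 0] = 1.0  # crude handling of nodes with no outgoing edges
            r = np.full(n, 1.0 / n)
            for _ in range(max_iter):
                r_new = (1 - d) / n + d * (A @ (r / out_deg))
                if np.abs(r_new - r).sum() < tol:
                    return r_new
                r = r_new
            return r

        print(pagerank([(0, 1), (1, 2), (2, 0), (2, 1)], n=3))

    Libraries such as networkx or igraph wrap the same idea; distributed or out-of-core graph frameworks only become necessary once the edge list stops fitting in RAM.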

    Read the article

  • $RECYCLE.BIN.trashinfo: Input/output error

    - by Parto
    I cannot delete the .Trash-503 folder via the GUI or the terminal; it returns a $RECYCLE.BIN.trashinfo: Input/output error. Not even sudo rm -r or an ls works in that directory. Check the terminal output below: subroot@subroot:~$ cd /media/xxxxx/ subroot@subroot:/media/xxxxx$ rm .Trash-503/ rm: cannot remove `.Trash-503/': Is a directory subroot@subroot:/media/xxxxx$ rm -r .Trash-503/ rm: cannot remove `.Trash-503/info/$RECYCLE.BIN.trashinfo': Input/output error rm: cannot remove `.Trash-503/info/found.000.trashinfo': Input/output error rm: cannot remove `.Trash-503/info': Directory not empty subroot@subroot:/media/BONJOUR$ sudo rm -r .Trash-503/ [sudo] password for subroot: rm: cannot remove `.Trash-503/info/$RECYCLE.BIN.trashinfo': Input/output error rm: cannot remove `.Trash-503/info/found.000.trashinfo': Input/output error subroot@subroot:/media/xxxxx$ cd .Trash-503/ subroot@subroot:/media/xxxxx/.Trash-503$ ls info subroot@subroot:/media/xxxxx/.Trash-503$ cd info/ subroot@subroot:/media/xxxxx/.Trash-503/info$ ls ls: cannot access $RECYCLE.BIN.trashinfo: Input/output error ls: cannot access found.000.trashinfo: Input/output error found.000.trashinfo $RECYCLE.BIN.trashinfo subroot@subroot:/media/xxxxx/.Trash-503/info$ What's going on here and how can I delete this folder?

    Read the article

  • Hardware instancing for voxel engine

    - by Menno Gouw
    I just did the tutorial on hardware instancing from this source: http://www.float4x4.net/index.php/2011/07/hardware-instancing-for-pc-in-xna-4-with-textures/. Somewhere between 900.000 and 1.000.000 draw calls for the cube I get this error: "XNA Framework HiDef profile supports a maximum VertexBuffer size of 67108863." while still running smoothly at 900k. That is slightly less than 100x100x100, which is exactly a million. Now I have seen voxel engines with very "tiny" voxels; you easily get to 1.000.000 cubes in view with rough terrain and a decent far plane. Obviously I can optimize a lot in the geometry buffer method, like rendering only the visible faces of a cube or using larger faces covering multiple cubes if the area is flat. But is a vertex buffer of roughly 67 MB the max I can work with, or can I create multiple?

    Read the article

  • Which is the most practical way to add functionality to this piece of code?

    - by Adam Arold
    I'm writing an open source library which handles hexagonal grids. It mainly revolves around the HexagonalGrid and the Hexagon class. There is a HexagonalGridBuilder class which builds the grid, which contains Hexagon objects. What I'm trying to achieve is to enable the user to add arbitrary data to each Hexagon. The interface looks like this: public interface Hexagon extends Serializable { // ... other methods not important in this context <T> void setSatelliteData(T data); <T> T getSatelliteData(); } So far so good. I'm writing another class however, named HexagonalGridCalculator, which adds some fancy pieces of computation to the library, like calculating the shortest path between two Hexagons or calculating the line of sight around a Hexagon. My problem is that for those I need the user to supply some data for the Hexagon objects, like the cost of passing through a Hexagon, or a boolean flag indicating whether the object is transparent/passable or not. My question is how I should implement this. My first idea was to write an interface like this: public interface HexagonData { void setTransparent(boolean isTransparent); void setPassable(boolean isPassable); void setPassageCost(int cost); } and make the user implement it, but then it came to my mind that if I add any other functionality later, all code will break for those who are using the old interface. So my next idea is to add annotations like @PassageCost, @IsTransparent and @IsPassable which can be added to fields, and when I'm doing the computation I can look for the annotations in the satelliteData supplied by the user. This looks flexible enough if I take into account the possibility of later changes, but it uses reflection. I have no benchmark of the costs of using annotations, so I'm a bit in the dark here. I think that in 90-95% of the cases the efficiency is not important, since most users won't use a grid where this is significant, but I can imagine someone trying to create a grid with a size of 5.000.000.000 X 5.000.000.000. So which path should I start walking on? Or are there some better alternatives? Note: These ideas are not implemented yet, so I did not pay too much attention to good names.

    Read the article

  • Windows Phone: the Marketplace will evolve to keep up with its growth; Microsoft announces an update for the summer

    Windows Phone: the Marketplace will evolve to keep up with its growth; Microsoft announces an update for the summer. The Windows Phone application gallery has seen a frantic growth rate over the last few months. During the first three months of the year, more than 29,000 new applications were submitted to the Marketplace (an increase of 60%), allowing the gallery to cross the symbolic threshold of 80,000 applications. In addition, the number of galleries doubled during that time, rising to 54. The developer program now counts more than 20,000 registered developers. At this rate, the gallery will pass the 100,000-application mark in less than 100 days.

    Read the article

  • How to display Currency in Indian Numbering Format in PHP

    - by Somnath Muluk
    I have a question about formatting the Rupee currency (Indian Rupee - INR). For example, numbers here are represented as: 1 10 100 1,000 10,000 1,00,000 10,00,000 1,00,00,000 10,00,00,000 Refer to the Indian Numbering System. I have to do it with PHP. I have seen this question: Displaying Currency in Indian Numbering Format. But I wasn't able to adapt it to my PHP problem. Update: How do I use money_format() with the Indian currency format?
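
    The question is about PHP, but the grouping rule itself is simple: keep the last three digits together, then split the rest into pairs. A hypothetical Python sketch of just that rule (the function name and details are mine, not from the question):

        def indian_format(n):
            """Group digits Indian-style: last three digits, then pairs (e.g. 1,00,00,000)."""
            s = str(abs(int(n)))
            if len(s) <= 3:
                grouped = s
            else:
                head, tail = s[:-3], s[-3:]
                pairs = []
                while head:
                    pairs.append(head[-2:])
                    head = head[:-2]
                grouped = ",".join(reversed(pairs)) + "," + tail
            return ("-" if n < 0 else "") + grouped

        for n in (1, 100, 1000, 100000, 10000000):
            print(indian_format(n))  # 1  100  1,000  1,00,000  1,00,00,000

    The same loop translates directly to PHP string functions; alternatively, the intl extension's NumberFormatter with an en_IN locale should produce this grouping natively.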

    Read the article

  • T-SQL: from rows to columns but not an actual pivot

    - by Matte
    Is there a T-SQL (SQL Server 2008R2) query to transform TABLE_1 into the expected resultset?

    TABLE_1
    +----------+-------------------------+---------+------+
    | IdDevice | Timestamp               | M300    | M400 |
    +----------+-------------------------+---------+------+
    | 3        | 2012-12-05 16:29:51.000 | 2357,69 | 520  |
    | 6        | 2012-12-05 16:29:51.000 | 1694,81 | 470  |
    | 1        | 2012-12-05 16:29:51.000 | 2046,33 | 111  |
    +----------+-------------------------+---------+------+

    Expected resultset
    +-------------------------+---------+--------+---------+--------+---------+--------+
    | Timestamp               | 3_M300  | 3_M400 | 6_M300  | 6_M400 | 1_M300  | 1_M400 |
    +-------------------------+---------+--------+---------+--------+---------+--------+
    | 2012-12-05 16:29:51.000 | 2357,69 | 520    | 1694,81 | 470    | 2046,33 | 111    |
    +-------------------------+---------+--------+---------+--------+---------+--------+

    Read the article

  • postgres too slow

    - by Killercode
    Hi, I'm doing massive tests on a Postgres database... so basically I have 2 tables into which I inserted 40.000.000 records (let's say table1) and 80.000.000 (table2); after this I deleted all those records. Now if I do SELECT * FROM table1 it takes 199000 ms. I can't understand what's happening. Can anyone help me with this?

    Read the article

  • XPath question (XML / XPath)

    - by Ibrar Afzal
    I need an xpath expression that would return the value of I need to get the value of this node. the value to extract is my xpath expression is //rates/rate[loantype='30-Year Fixed Rate'] The issue hre is that there are three value each node has a subtype element. Beside fileter for loantype I also need to filter for subtype. I am not sure how to do it in xpath. I have the following xml 40-Year Fixed Rate A 3 5.375 1.000 5.491 0 1 40-Year Fixed Rate B 5.500 0.500 5.579 0 1 40-Year Fixed Rate C 5.625 0.000 5.667 0 1 30-Year Fixed Rate A 3 5.000 1.000 5.134 0 1 30-Year Fixed Rate B 5.125 0.500 5.215 0 1 30-Year Fixed Rate C 5.250 0.000 5.297 0 1 20-Year Fixed Rate A 3 4.875 1.000 5.055 0 1 20-Year Fixed Rate B 5.000 0.500 5.121 0 1 20-Year Fixed Rate C 5.125 0.000 5.187 0 1 15-Year Fixed Rate A 3 4.250 1.000 4.467 0 1 15-Year Fixed Rate B 4.375 0.500 4.512 0 1 15-Year Fixed Rate C 4.500 0.000 4.570 0 1 10-Year Fixed Rate A 3 4.125 1.000 4.435 0 1 10-Year Fixed Rate B 4.250 0.500 4.454 0 1 10-Year Fixed Rate C 4.375 0.000 4.473 0 1 High-Balance 15-Year Fixed Rate D 3 4.250 1.000 4.461 0 1 High-Balance 15-Year Fixed Rate B 4.375 0.500 4.512 0 1 High-Balance 15-Year Fixed Rate C 4.500 0.000 4.563 0 1 High-Balance 30-Year Fixed Rate D 3 5.000 1.000 5.130 0 1 High-Balance 30-Year Fixed Rate B 5.125 0.500 5.211 0 1 High-Balance 30-Year Fixed Rate C 5.250 0.000 5.293 0 1 30-Year Fixed Rate Jumbo A 2 5.125 1.000 5.254 1 1 30-Year Fixed Rate Jumbo B 5.250 0.500 5.336 1 1 30-Year Fixed Rate Jumbo C 5.375 0.000 5.417 1 1 -- 15-Year Fixed Rate Jumbo A 2 5.000 1.000 5.220 1 1 15-Year Fixed Rate Jumbo B 5.125 0.500 5.270 1 1 15-Year Fixed Rate Jumbo C 5.250 0.000 5.320 1 1 -- 3/1 30-Year Adjustable Rate A 3 3.625 1.000 3.431 0 0 3/1 30-Year Adjustable Rate B 3.875 0.500 3.448 0 0 3/1 30-Year Adjustable Rate C 4.125 0.000 3.465 0 0 3/1 40-Year Adjustable Rate A 3 3.875 1.000 3.438 0 0 3/1 40-Year Adjustable Rate B 4.125 0.500 3.453 0 0 3/1 40-Year Adjustable Rate C 4.375 0.000 3.467 0 0 5/1 30-Year Adjustable Rate A 3 3.375 1.000 3.401 0 0 5/1 30-Year Adjustable Rate B 3.625 0.500 3.457 0 0 5/1 30-Year Adjustable Rate C 3.875 0.000 3.514 0 0 5/1 40-Year Adjustable Rate A 3 3.625 1.000 3.441 0 0 5/1 40-Year Adjustable Rate B 3.875 0.500 3.481 0 0 5/1 40-Year Adjustable Rate C 4.125 0.000 3.531 0 0 7/1 30-Year Adjustable Rate A 3 3.875 1.000 3.670 0 0 7/1 30-Year Adjustable Rate B 4.125 0.500 3.755 0 0 7/1 30-Year Adjustable Rate C 4.375 0.000 3.841 0 0 10/1 30-Year Adjustable Rate A 3 4.375 1.000 4.092 0 0 10/1 30-Year Adjustable Rate B 4.625 0.500 4.217 0 0 10/1 30-Year Adjustable Rate C 4.875 0.000 4.342 0 0 -- 2/2 ARM 30-Year (Purchase only) DH 5.250 0.000 3.709 0 0 -- High-Balance 5/1 30-Year Adjustable Rate D 3 3.375 1.000 3.366 0 0 High-Balance 5/1 30-Year Adjustable Rate B 3.625 0.500 3.404 0 0 High-Balance 5/1 30-Year Adjustable Rate C 3.875 0.000 3.454 0 0 High-Balance 7/1 30-Year Adjustable Rate D 3 3.875 1.000 3.670 0 0 High-Balance 7/1 30-Year Adjustable Rate B 4.125 0.500 3.755 0 0 High-Balance 7/1 30-Year Adjustable Rate C 4.375 0.000 3.841 0 0 3/1 30-Year Jumbo Adjustable Rate A 2 4.875 1.000 3.719 1 0 3/1 30-Year Jumbo Adjustable Rate B 5.000 0.500 3.708 1 0 3/1 30-Year Jumbo Adjustable Rate C 5.125 0.000 3.704 1 0 -- 3/1 40-Year Jumbo Adjustable Rate A 2 5.250 1.000 3.733 1 0 3/1 40-Year Jumbo Adjustable Rate B 5.375 0.500 3.727 1 0 3/1 40-Year Jumbo Adjustable Rate C 5.500 0.000 3.725 1 0 -- 5/1 30-Year Jumbo Adjustable Rate A 3 4.375 1.000 3.791 1 0 5/1 30-Year Jumbo Adjustable Rate B 
4.500 0.500 3.803 1 0 5/1 30-Year Jumbo Adjustable Rate C 4.625 0.000 3.814 1 0 5/1 40-Year Jumbo Adjustable Rate A 2 5.000 1.000 3.922 1 0 5/1 40-Year Jumbo Adjustable Rate B 5.125 0.500 3.925 1 0 5/1 40-Year Jumbo Adjustable Rate C 5.250 0.000 3.936 1 0 -- 7/1 30-Year Jumbo Adjustable Rate A 3 4.950 1.000 4.261 1 0 7/1 30-Year Jumbo Adjustable Rate B 5.075 0.500 4.286 1 0 7/1 30-Year Jumbo Adjustable Rate C 5.200 0.000 4.311 1 0 2/2 ARM 30-Year Jumbo (Purchase only) DH 6.500 0.000 4.260 1 0 -- 30 Due in 7 Fixed Rate JUMBO Balloon A 6.375 1.000 6.613 1 0 30 Due in 7 Fixed Rate JUMBO Balloon B 6.500 0.500 6.625 1 0 40 due in 7 Fixed Rate offer1 5.250 0.000 5.374 0 0 1 40 Due in 7 Fixed Rate JUMBO Balloon offer2 6.500 0.000 6.625 1 0 1 Interest Only HELOC A To 80% LTV 3.250 0 1 Home Equity Loan - 7Yrs A Up to $100,000.00 Up to 75% LTV 6.000 6.000 0 2 Home Equity Loan - 7Yrs A $100,000.01 - $250,000.00 Up to 75% LTV 6.00 6.153 0 2 Home Equity Loan - 7Yrs A Up to $100,000.00 Up to 80% LTV 6.250 6.250 0 2 Home Equity Loan - 7Yrs A $100,000.01 - $250,000.00 Up to 80% LTV 6.25 6.403 0 2 Home Equity Loan - 7Yrs B $100,000.01 - $250,000.00 Up to 90% LTV 6.99 7.145 0 2 Home Equity Loan - 10,15Yrs C $5,000-$250,000.00 To 75% LTV 6.50 6.612 0 2 Home Equity Loan - 10,15Yrs C $5,000-$250,000.00 To 80% LTV 6.75 6.863 0 2 Home Equity Loan - 10,15Yrs D $5,000-$250,000.00 Up to 90% LTV 7.50 7.614 0 2 Home Equity Loan - 20Yrs E $5,000-$250,000.00 To 75% LTV 7.50 7.566 0 2 Home Equity Loan - 20Yrs E $5,000-$250,000.00 To 80% LTV 7.75 7.817 0 2 Home Equity Loan - 20Yrs F $5,000-$250,000.00 Up to 90% LTV 8.50 8.569 0 2 Equity Edge $5,000-$25,000.00 Up to 125% LTV 12.00 12.188 Current Index 0.350 Prime Index 3.250 03/26/2010
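
    The XML in the excerpt above lost its markup, so the element names below are partly guesses, but the filtering itself is just a matter of putting both conditions in one predicate joined with and. A hypothetical Python/lxml sketch (<rates>, <rate> and <loantype> appear in the question's own expression; <subtype> is mentioned in the text, and <value> is invented here for illustration):

        from lxml import etree

        doc = etree.fromstring("""
        <rates>
          <rate><loantype>30-Year Fixed Rate</loantype><subtype>A</subtype><value>5.000</value></rate>
          <rate><loantype>30-Year Fixed Rate</loantype><subtype>B</subtype><value>5.125</value></rate>
          <rate><loantype>30-Year Fixed Rate</loantype><subtype>C</subtype><value>5.250</value></rate>
        </rates>
        """)

        # Filter on both child elements inside a single predicate.
        values = doc.xpath("//rates/rate[loantype='30-Year Fixed Rate' and subtype='A']/value/text()")
        print(values)  # ['5.000']

    The same predicate works in any XPath 1.0 processor, not just lxml.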

    Read the article

  • Confirm disk is broken when it passes all diagnostics

    - by Halfgaar
    I have a system with a potentially broken disk, but the disk passes all manner of diagnostics. I have been unable to confirm that the disk is broken. What are my options? I could just replace the disk, but because this situation is very similar to another more severe situation I have (long story), I'd like to actually make a proper diagnosis as opposed to randomly binning hardware. The issue and history is this: I had a Debian Linux PC (500 MHz P3) acting as router, nagios and munin. It crashed every couple of weeks. No logs or dmesg could be obtained (because it's an old Compaq that only boots when you configure it as keyboardless, making connecting a keyboard later, once it's booted, impossible). At the time, I just replaced the computer with another Compaq (P4 2.4 GHz) because I thought the hardware was faulty. However, it still crashed every couple of weeks. the difference is that on this computer, I can still SSH into it. It gives all kinds of errors on hda. I'd like to confirm that the disk is broken, but nothing I do confirms this: SMART error logs shows no errors. Normally when a disk starts acting up, SMART my pass, but it still records a read-error in the error log. SMART self-test (smartctl -t long /dev/sda) completes without errors. re-allocated sector count (a tell-tale parameter) has been 31 all its life, even when the disk was still in use in my desktop PC years ago, and it still is. The figure never changed. dd if=/dev/sda of=/dev/null bs=4096 passes with flying colors. What else can I do to assess the health of the drive? Again, this is not about making this router fully functional again, this is a disk forensic question, because it just so happens that I have another server that potentially has the same problem, and knowing the answer to this will possibly help me greatly. For the record, below are logs and such. This is the smartctl -a output: smartctl 5.40 2010-07-12 r3124 [i686-pc-linux-gnu] (local build) Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net === START OF INFORMATION SECTION === Model Family: Seagate Barracuda 7200.7 and 7200.7 Plus family Device Model: ST3120026A Serial Number: 5JT1CLQM Firmware Version: 3.06 User Capacity: 120,034,123,776 bytes Device is: In smartctl database [for details use: -P show] ATA Version is: 6 ATA Standard is: ATA/ATAPI-6 T13 1410D revision 2 Local Time is: Mon Jul 1 21:18:33 2013 CEST SMART support is: Available - device has SMART capability. SMART support is: Enabled === START OF READ SMART DATA SECTION === SMART overall-health self-assessment test result: PASSED General SMART Values: Offline data collection status: (0x82) Offline data collection activity was completed without error. Auto Offline Data Collection: Enabled. Self-test execution status: ( 24) The self-test routine was aborted by the host. Total time to complete Offline data collection: ( 430) seconds. Offline data collection capabilities: (0x5b) SMART execute Offline immediate. Auto Offline data collection on/off support. Suspend Offline collection upon new command. Offline surface scan supported. Self-test supported. No Conveyance Self-test supported. Selective Self-test supported. SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer. Error logging capability: (0x01) Error logging supported. No General Purpose Logging support. Short self-test routine recommended polling time: ( 1) minutes. Extended self-test routine recommended polling time: ( 85) minutes. 
SMART Attributes Data Structure revision number: 10 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x000f 050 046 006 Pre-fail Always - 47766662 3 Spin_Up_Time 0x0003 097 096 000 Pre-fail Always - 0 4 Start_Stop_Count 0x0032 100 100 020 Old_age Always - 10 5 Reallocated_Sector_Ct 0x0033 100 100 036 Pre-fail Always - 31 7 Seek_Error_Rate 0x000f 084 060 030 Pre-fail Always - 820305 9 Power_On_Hours 0x0032 048 048 000 Old_age Always - 46373 10 Spin_Retry_Count 0x0013 100 100 097 Pre-fail Always - 0 12 Power_Cycle_Count 0x0032 100 100 020 Old_age Always - 605 194 Temperature_Celsius 0x0022 036 065 000 Old_age Always - 36 195 Hardware_ECC_Recovered 0x001a 050 046 000 Old_age Always - 47766662 197 Current_Pending_Sector 0x0012 100 100 000 Old_age Always - 0 198 Offline_Uncorrectable 0x0010 100 100 000 Old_age Offline - 0 199 UDMA_CRC_Error_Count 0x003e 200 196 000 Old_age Always - 6 200 Multi_Zone_Error_Rate 0x0000 100 253 000 Old_age Offline - 0 202 Data_Address_Mark_Errs 0x0032 100 253 000 Old_age Always - 0 SMART Error Log Version: 1 No Errors Logged SMART Self-test log structure revision number 1 Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error # 1 Extended offline Aborted by host 80% 46361 - # 2 Extended offline Completed without error 00% 46358 - # 3 Short offline Completed without error 00% 12046 - # 4 Extended offline Completed without error 00% 10472 - # 5 Short offline Completed without error 00% 10471 - # 6 Short offline Completed without error 00% 10471 - # 7 Short offline Completed without error 00% 6770 - # 8 Extended offline Aborted by host 90% 5958 - # 9 Extended offline Aborted by host 90% 5951 - #10 Short offline Completed without error 00% 5024 - #11 Extended offline Aborted by host 80% 5024 - #12 Short offline Completed without error 00% 3697 - #13 Short offline Completed without error 00% 237 - #14 Short offline Completed without error 00% 145 - #15 Short offline Completed without error 00% 69 - #16 Extended offline Completed without error 00% 68 - #17 Short offline Completed without error 00% 66 - #18 Short offline Completed without error 00% 49 - #19 Short offline Completed without error 00% 29 - #20 Short offline Completed without error 00% 29 - SMART Selective self-test log data structure revision number 1 SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS 1 0 0 Not_testing 2 0 0 Not_testing 3 0 0 Not_testing 4 0 0 Not_testing 5 0 0 Not_testing Selective self-test flags (0x0): After scanning selected spans, do NOT read-scan remainder of disk. If Selective self-test is pending on power-up, resume after 0 minute delay. And this is the dmesg error when it has crashed (which repeats for a bunch of different sectors): [1755091.211136] sd 0:0:0:0: [sda] Unhandled error code [1755091.211144] sd 0:0:0:0: [sda] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [1755091.211151] sd 0:0:0:0: [sda] CDB: Read(10): 28 00 08 fe ad 38 00 00 08 00 [1755091.211166] end_request: I/O error, dev sda, sector 150908216

    Read the article

  • Skipping scheduled self-tests and predicting drive EOL

    - by Steve Madsen
    For a few weeks now, smartd has been reporting that it is skipping some of its scheduled self-tests on the weekends: Apr 24 18:29:32 calvin smartd[4758]: Device: /dev/sda, skip scheduled Offline Immediate Test; 40% remaining of current Self-Test. Apr 24 18:29:33 calvin smartd[4758]: Device: /dev/sdb, skip scheduled Offline Immediate Test; 50% remaining of current Self-Test. The drives in this RAID-1 array are set to run an offline test four times a day, a short self-test at 2am every day, and a long self-test on Saturdays at 2am. For some reason, it looks like the long self-test is taking longer, causing the other scheduled tests to be skipped. First question: is this a sign of likely drive failure? Then today, smartd reported that a self-test failed. Here is the output of smartctl -a /dev/sdb: smartctl version 5.38 [i686-pc-linux-gnu] Copyright (C) 2002-8 Bruce Allen Home page is http://smartmontools.sourceforge.net/ === START OF INFORMATION SECTION === Model Family: Seagate Barracuda 7200.8 family Device Model: ST3250823AS Serial Number: 3ND1GNBC Firmware Version: 3.03 User Capacity: 250,059,350,016 bytes Device is: In smartctl database [for details use: -P show] ATA Version is: 7 ATA Standard is: Exact ATA specification draft version not indicated Local Time is: Sun Apr 25 13:15:34 2010 EDT SMART support is: Available - device has SMART capability. SMART support is: Enabled === START OF READ SMART DATA SECTION === SMART overall-health self-assessment test result: PASSED General SMART Values: Offline data collection status: (0x82) Offline data collection activity was completed without error. Auto Offline Data Collection: Enabled. Self-test execution status: ( 0) The previous self-test routine completed without error or no self-test has ever been run. Total time to complete Offline data collection: ( 430) seconds. Offline data collection capabilities: (0x5b) SMART execute Offline immediate. Auto Offline data collection on/off support. Suspend Offline collection upon new command. Offline surface scan supported. Self-test supported. No Conveyance Self-test supported. Selective Self-test supported. SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer. Error logging capability: (0x01) Error logging supported. General Purpose Logging supported. Short self-test routine recommended polling time: ( 1) minutes. Extended self-test routine recommended polling time: ( 84) minutes. 
SMART Attributes Data Structure revision number: 10 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x000f 047 039 006 Pre-fail Always - 168450357 3 Spin_Up_Time 0x0003 098 098 000 Pre-fail Always - 0 4 Start_Stop_Count 0x0032 100 100 020 Old_age Always - 33 5 Reallocated_Sector_Ct 0x0033 100 100 036 Pre-fail Always - 9 7 Seek_Error_Rate 0x000f 087 060 030 Pre-fail Always - 654745480 9 Power_On_Hours 0x0032 055 055 000 Old_age Always - 40141 10 Spin_Retry_Count 0x0013 100 100 097 Pre-fail Always - 0 12 Power_Cycle_Count 0x0032 100 100 020 Old_age Always - 51 194 Temperature_Celsius 0x0022 037 062 000 Old_age Always - 37 (0 17 0 0) 195 Hardware_ECC_Recovered 0x001a 047 039 000 Old_age Always - 168450357 197 Current_Pending_Sector 0x0012 100 100 000 Old_age Always - 0 198 Offline_Uncorrectable 0x0010 100 100 000 Old_age Offline - 0 199 UDMA_CRC_Error_Count 0x003e 200 200 000 Old_age Always - 0 200 Multi_Zone_Error_Rate 0x0000 100 253 000 Old_age Offline - 0 202 TA_Increase_Count 0x0032 100 253 000 Old_age Always - 0 SMART Error Log Version: 1 No Errors Logged SMART Self-test log structure revision number 1 Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error # 1 Short offline Completed without error 00% 40131 - # 2 Extended offline Completed: read failure 30% 40129 379795511 # 3 Short offline Completed without error 00% 40084 - # 4 Short offline Completed without error 00% 40060 - # 5 Short offline Completed without error 00% 40036 - # 6 Short offline Completed without error 00% 40013 - # 7 Short offline Completed without error 00% 39990 - # 8 Extended offline Completed without error 00% 39977 - # 9 Short offline Completed without error 00% 39919 - #10 Short offline Completed without error 00% 39895 - #11 Short offline Completed without error 00% 39872 - #12 Short offline Completed without error 00% 39848 - #13 Short offline Completed without error 00% 39824 - #14 Short offline Completed without error 00% 39801 - #15 Extended offline Completed without error 00% 39789 - #16 Short offline Completed without error 00% 39754 - #17 Short offline Completed without error 00% 39732 - #18 Short offline Completed without error 00% 39707 - #19 Short offline Completed without error 00% 39683 - #20 Short offline Completed without error 00% 39660 - #21 Short offline Completed without error 00% 39636 - SMART Selective self-test log data structure revision number 1 SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS 1 0 0 Not_testing 2 0 0 Not_testing 3 0 0 Not_testing 4 0 0 Not_testing 5 0 0 Not_testing Selective self-test flags (0x0): After scanning selected spans, do NOT read-scan remainder of disk. If Selective self-test is pending on power-up, resume after 0 minute delay. Given that this drive is about 4.5 years old, I am probably tempting fate by keeping it in service. SMART doesn't seem to get much respect as a reliable way to predict drive failure. What else can I use to get an early indication of drive failure?

    Read the article

  • Failing SATA HDD

    - by DaveCol
    I think my HDD is fried... Could someone confirm or help me restore it? I was using Hardware RAID 1 Configuration [2 x 160GB SATA HDD] on a CentOS 4 Installation. All of a sudden I started seeing bad sectors on the second HDD which stopped being mirrored. I have removed the RAID array and have tested with SMART which showed the following error: 187 Unknown_Attribute 0x003a 001 001 051 Old_age Always FAILING_NOW 4645 I have no clue what this means, or if I can recover from it. Could someone give me some ideas on how to fix this, or what HDD to get to replace this? Complete SMART report: Smartctl version 5.33 [i686-redhat-linux-gnu] Copyright (C) 2002-4 Bruce Allen Home page is http://smartmontools.sourceforge.net/ === START OF INFORMATION SECTION === Device Model: GB0160CAABV Serial Number: 6RX58NAA Firmware Version: HPG1 User Capacity: 160,041,885,696 bytes Device is: Not in smartctl database [for details use: -P showall] ATA Version is: 7 ATA Standard is: ATA/ATAPI-7 T13 1532D revision 4a Local Time is: Tue Oct 19 13:42:42 2010 COT SMART support is: Available - device has SMART capability. SMART support is: Enabled === START OF READ SMART DATA SECTION === SMART overall-health self-assessment test result: PASSED See vendor-specific Attribute list for marginal Attributes. General SMART Values: Offline data collection status: (0x82) Offline data collection activity was completed without error. Auto Offline Data Collection: Enabled. Self-test execution status: ( 0) The previous self-test routine completed without error or no self-test has ever been run. Total time to complete Offline data collection: ( 433) seconds. Offline data collection capabilities: (0x5b) SMART execute Offline immediate. Auto Offline data collection on/off support. Suspend Offline collection upon new command. Offline surface scan supported. Self-test supported. No Conveyance Self-test supported. Selective Self-test supported. SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer. Error logging capability: (0x01) Error logging supported. General Purpose Logging supported. Short self-test routine recommended polling time: ( 2) minutes. Extended self-test routine recommended polling time: ( 54) minutes. 
SMART Attributes Data Structure revision number: 10 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x000f 100 253 006 Pre-fail Always - 0 3 Spin_Up_Time 0x0002 097 097 000 Old_age Always - 0 4 Start_Stop_Count 0x0033 100 100 020 Pre-fail Always - 152 5 Reallocated_Sector_Ct 0x0033 095 095 036 Pre-fail Always - 214 7 Seek_Error_Rate 0x000f 078 060 030 Pre-fail Always - 73109713 9 Power_On_Hours 0x0032 083 083 000 Old_age Always - 15133 10 Spin_Retry_Count 0x0013 100 100 097 Pre-fail Always - 0 12 Power_Cycle_Count 0x0033 100 100 020 Pre-fail Always - 154 184 Unknown_Attribute 0x0032 038 038 000 Old_age Always - 62 187 Unknown_Attribute 0x003a 001 001 051 Old_age Always FAILING_NOW 4645 189 Unknown_Attribute 0x0022 100 100 000 Old_age Always - 0 190 Unknown_Attribute 0x001a 061 055 000 Old_age Always - 656408615 194 Temperature_Celsius 0x0000 039 045 000 Old_age Offline - 39 (Lifetime Min/Max 0/22) 195 Hardware_ECC_Recovered 0x0032 070 059 000 Old_age Always - 12605265 197 Current_Pending_Sector 0x0000 100 100 000 Old_age Offline - 1 198 Offline_Uncorrectable 0x0000 100 100 000 Old_age Offline - 0 199 UDMA_CRC_Error_Count 0x0000 200 200 000 Old_age Offline - 62 SMART Error Log Version: 1 ATA Error Count: 4645 (device log contains only the most recent five errors) CR = Command Register [HEX] FR = Features Register [HEX] SC = Sector Count Register [HEX] SN = Sector Number Register [HEX] CL = Cylinder Low Register [HEX] CH = Cylinder High Register [HEX] DH = Device/Head Register [HEX] DC = Device Command Register [HEX] ER = Error register [HEX] ST = Status register [HEX] Powered_Up_Time is measured from power on, and printed as DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes, SS=sec, and sss=millisec. It "wraps" after 49.710 days. Error 4645 occurred at disk power-on lifetime: 15132 hours (630 days + 12 hours) When the command that caused the error occurred, the device was active or idle. After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 40 51 00 7b 86 b1 ea Error: UNC at LBA = 0x0ab1867b = 179406459 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- c8 00 02 7b 86 b1 ea 00 00:38:52.796 READ DMA ec 03 45 00 00 00 a0 00 00:38:52.796 IDENTIFY DEVICE ef 03 45 00 00 00 a0 00 00:38:52.794 SET FEATURES [Set transfer mode] ec 00 00 7b 86 b1 a0 00 00:38:49.991 IDENTIFY DEVICE c8 00 04 79 86 b1 ea 00 00:38:49.935 READ DMA Error 4644 occurred at disk power-on lifetime: 15132 hours (630 days + 12 hours) When the command that caused the error occurred, the device was active or idle. 
After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 40 51 00 7b 86 b1 ea Error: UNC at LBA = 0x0ab1867b = 179406459 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- c8 00 04 79 86 b1 ea 00 00:38:41.517 READ DMA ec 03 45 00 00 00 a0 00 00:38:41.515 IDENTIFY DEVICE ef 03 45 00 00 00 a0 00 00:38:41.515 SET FEATURES [Set transfer mode] ec 00 00 7b 86 b1 a0 00 00:38:49.991 IDENTIFY DEVICE c8 00 06 77 86 b1 ea 00 00:38:49.935 READ DMA Error 4643 occurred at disk power-on lifetime: 15132 hours (630 days + 12 hours) When the command that caused the error occurred, the device was active or idle. After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 40 51 00 7b 86 b1 ea Error: UNC at LBA = 0x0ab1867b = 179406459 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- c8 00 06 77 86 b1 ea 00 00:38:41.517 READ DMA ec 03 45 00 00 00 a0 00 00:38:41.515 IDENTIFY DEVICE ef 03 45 00 00 00 a0 00 00:38:41.515 SET FEATURES [Set transfer mode] ec 00 00 7b 86 b1 a0 00 00:38:41.513 IDENTIFY DEVICE c8 00 06 77 86 b1 ea 00 00:38:38.706 READ DMA Error 4642 occurred at disk power-on lifetime: 15132 hours (630 days + 12 hours) When the command that caused the error occurred, the device was active or idle. After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 40 51 00 7b 86 b1 ea Error: UNC at LBA = 0x0ab1867b = 179406459 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- c8 00 06 77 86 b1 ea 00 00:38:41.517 READ DMA ec 03 45 00 00 00 a0 00 00:38:41.515 IDENTIFY DEVICE ef 03 45 00 00 00 a0 00 00:38:41.515 SET FEATURES [Set transfer mode] ec 00 00 7b 86 b1 a0 00 00:38:41.513 IDENTIFY DEVICE c8 00 06 77 86 b1 ea 00 00:38:38.706 READ DMA Error 4641 occurred at disk power-on lifetime: 15132 hours (630 days + 12 hours) When the command that caused the error occurred, the device was active or idle. After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 40 51 00 7b 86 b1 ea Error: UNC at LBA = 0x0ab1867b = 179406459 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- c8 00 06 77 86 b1 ea 00 00:38:41.517 READ DMA ec 03 45 00 00 00 a0 00 00:38:41.515 IDENTIFY DEVICE ef 03 45 00 00 00 a0 00 00:38:41.515 SET FEATURES [Set transfer mode] ec 00 00 7b 86 b1 a0 00 00:38:41.513 IDENTIFY DEVICE c8 00 06 77 86 b1 ea 00 00:38:38.706 READ DMA SMART Self-test log structure revision number 1 Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error # 1 Short offline Completed without error 00% 15131 - # 2 Short offline Completed without error 00% 15131 - SMART Selective self-test log data structure revision number 1 SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS 1 0 0 Not_testing 2 0 0 Not_testing 3 0 0 Not_testing 4 0 0 Not_testing 5 0 0 Not_testing Selective self-test flags (0x0): After scanning selected spans, do NOT read-scan remainder of disk. If Selective self-test is pending on power-up, resume after 0 minute delay.

    Read the article

  • Working with bytes and binary data in Python

    - by ignoramus
    Four consecutive bytes in a byte string together specify some value. However, only 7 bits in each byte are used; the most significant bit is ignored (that makes 28 bits altogether). So... b"\x00\x00\x02\x01" would be 000 0000 000 0000 000 0010 000 0001. Or, for the sake of legibility, 10 000 0001. That's the value the four bytes represent. But I want a decimal, so I do this: >>> 0b100000001 257 I can work all that out myself, but how would I incorporate it into a program?
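
    A small Python 3 sketch of exactly that unpacking (7 low bits per byte, first byte most significant, as described above; the function name is mine):

        def decode_7bit(data):
            """Combine the low 7 bits of each byte, most significant byte first."""
            value = 0
            for byte in data:  # iterating a bytes object yields ints in Python 3
                value = (value << 7) | (byte & 0x7F)
            return value

        print(decode_7bit(b"\x00\x00\x02\x01"))  # 257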

    Read the article

  • Is this possible with a SQL 2005 CTE?

    - by aenima1982
    I have been working on a query that will return a suggested start date for a manufacturing line, based on the due date and the number of minutes needed to complete the task. There is a calendar table (LINE_ID, CALENDAR_DATE, SCHEDULED_MINUTES) that lists, per manufacturing line, the number of minutes scheduled for each day. Example (usually 3 shifts' worth of time scheduled per day, no weekends, but it can vary): 1, 06/8/2010 00:00:00.000, 1440 1, 06/7/2010 00:00:00.000, 1440 1, 06/6/2010 00:00:00.000, 0 1, 06/5/2010 00:00:00.000, 0 1, 06/4/2010 00:00:00.000, 1440 In order to get the suggested start date, I need to start with the due date and iterate downward through the days until I have accumulated enough time to complete the task. My question: can something like this be done with a CTE, or is this something that should be handled by a cursor? Or... am I just going about this the wrong way completely?
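
    The T-SQL answer aside (a recursive CTE can walk the calendar backwards, and SQL Server 2005 does support recursive CTEs), the accumulation logic itself is easy to sanity-check outside the database. A hypothetical Python sketch using the calendar rows from the example above:

        from datetime import date, timedelta

        def suggested_start(calendar, due_date, minutes_needed, max_lookback=365):
            """Walk backwards from the due date, accumulating each day's scheduled minutes."""
            day, remaining = due_date, minutes_needed
            for _ in range(max_lookback):
                remaining -= calendar.get(day, 0)
                if remaining <= 0:
                    return day
                day -= timedelta(days=1)
            return None  # not enough scheduled time within the lookback window

        cal = {date(2010, 6, 8): 1440, date(2010, 6, 7): 1440, date(2010, 6, 6): 0,
               date(2010, 6, 5): 0, date(2010, 6, 4): 1440}
        print(suggested_start(cal, date(2010, 6, 8), 3000))  # 2010-06-04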

    Read the article
