Search Results

Search found 3604 results on 145 pages for 'bigced 21'.


  • Is there an easy way to type in common math symbols?

    - by srcspider
    Disclaimer: I'm sure someone is going to moan about ease-of-use; for the purpose of this question, consider readability to be the only factor that matters. So I found this site that converts to easting/northing. It's not really important what that even means, but here's how the piece of JavaScript looks:

        /**
         * Convert Ordnance Survey grid reference easting/northing coordinate to (OSGB36) latitude/longitude
         *
         * @param {OsGridRef} gridref - easting/northing to be converted to latitude/longitude
         * @returns {LatLonE} latitude/longitude (in OSGB36) of supplied grid reference
         */
        OsGridRef.osGridToLatLong = function(gridref) {
            var E = gridref.easting;
            var N = gridref.northing;

            var a = 6377563.396, b = 6356256.909;          // Airy 1830 major & minor semi-axes
            var F0 = 0.9996012717;                         // NatGrid scale factor on central meridian
            var φ0 = 49*Math.PI/180, λ0 = -2*Math.PI/180;  // NatGrid true origin
            var N0 = -100000, E0 = 400000;                 // northing & easting of true origin, metres
            var e2 = 1 - (b*b)/(a*a);                      // eccentricity squared
            var n = (a-b)/(a+b), n2 = n*n, n3 = n*n*n;     // n, n², n³

            var φ = φ0, M = 0;
            do {
                φ = (N-N0-M)/(a*F0) + φ;
                var Ma = (1 + n + (5/4)*n2 + (5/4)*n3) * (φ-φ0);
                var Mb = (3*n + 3*n*n + (21/8)*n3) * Math.sin(φ-φ0) * Math.cos(φ+φ0);
                var Mc = ((15/8)*n2 + (15/8)*n3) * Math.sin(2*(φ-φ0)) * Math.cos(2*(φ+φ0));
                var Md = (35/24)*n3 * Math.sin(3*(φ-φ0)) * Math.cos(3*(φ+φ0));
                M = b * F0 * (Ma - Mb + Mc - Md);          // meridional arc
            } while (N-N0-M >= 0.00001);                   // ie until < 0.01mm

            var cosφ = Math.cos(φ), sinφ = Math.sin(φ);
            var ν = a*F0/Math.sqrt(1-e2*sinφ*sinφ);             // nu = transverse radius of curvature
            var ρ = a*F0*(1-e2)/Math.pow(1-e2*sinφ*sinφ, 1.5);  // rho = meridional radius of curvature
            var η2 = ν/ρ-1;                                     // eta squared

            var tanφ = Math.tan(φ);
            var tan2φ = tanφ*tanφ, tan4φ = tan2φ*tan2φ, tan6φ = tan4φ*tan2φ;
            var secφ = 1/cosφ;
            var ν3 = ν*ν*ν, ν5 = ν3*ν*ν, ν7 = ν5*ν*ν;
            var VII = tanφ/(2*ρ*ν);
            var VIII = tanφ/(24*ρ*ν3)*(5+3*tan2φ+η2-9*tan2φ*η2);
            var IX = tanφ/(720*ρ*ν5)*(61+90*tan2φ+45*tan4φ);
            var X = secφ/ν;
            var XI = secφ/(6*ν3)*(ν/ρ+2*tan2φ);
            var XII = secφ/(120*ν5)*(5+28*tan2φ+24*tan4φ);
            var XIIA = secφ/(5040*ν7)*(61+662*tan2φ+1320*tan4φ+720*tan6φ);

            var dE = (E-E0), dE2 = dE*dE, dE3 = dE2*dE, dE4 = dE2*dE2, dE5 = dE3*dE2, dE6 = dE4*dE2, dE7 = dE5*dE2;
            φ = φ - VII*dE2 + VIII*dE4 - IX*dE6;
            var λ = λ0 + X*dE - XI*dE3 + XII*dE5 - XIIA*dE7;

            return new LatLonE(φ.toDegrees(), λ.toDegrees(), GeoParams.datum.OSGB36);
        }

    I found that to be a really nice way of writing an algorithm, at least as far as readability is concerned. Is there any way to easily write the special symbols? And by easily write I mean NOT copy/paste them.
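
    One thing that helps frame the question: the snippet above works at all because ECMAScript allows Unicode letters such as φ, λ, and ν in identifiers, so typing the symbols is the only real obstacle. A minimal check, assuming any ES5-capable engine:

        var φ = Math.PI/4, λ = -2*Math.PI/180; // Greek letters are valid identifier characters
        console.log(φ + λ);                    // prints their sum, proving the names parse and resolve normally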

    Read the article

  • your AdSense account poses a risk of generating invalid activity

    - by Karington
    I received a mail from the AdSense team. I am not an AdSense expert; I'm actually quite new to it. I spent a lot of time on my site http://www.media1.rs, a news aggregator with tons of options. In the meantime I discovered the DoubleClick service, which had a good option to turn on Google ads when you don't have any others running, so I joined Google AdSense with my company account. Everything went smoothly until one day (21 Jul 2011) I got an email:

        Hello, After reviewing our records, we've determined that your AdSense account poses a risk of generating invalid activity. Because we have a responsibility to protect our AdWords advertisers from inflated costs due to invalid activity, we've found it necessary to disable your AdSense account. Your outstanding balance and Google's share of the revenue will both be fully refunded back to the affected advertisers. Please understand that we need to take such steps to maintain the effectiveness of Google's advertising system, particularly the advertiser-publisher relationship. We understand the inconvenience that this may cause you, and we thank you in advance for your understanding and cooperation. If you have any questions or concerns about the actions we've taken, how you can appeal this decision, or invalid activity in general, you can find more information by visiting http://www.google.com/adsense/support/bin/answer.py?answer=57153. Sincerely, The Google AdSense Team

    At first I didn't have any idea why... but then it came to me that it was maybe the auto-refresh script we had, because we publish news very often and it would be useful for visitors. I removed it immediately after I got the mail. Then I thought it might be my friends clicking, thinking that would help me (I didn't tell them to do it and don't know if they did), but then it couldn't be that, because otherwise anyone could organize 10 people and get any start-up banned, right? Anyway, I filled out the form on the answers page, mentioning the previously removed script, and got this from them:

        Hello, Thank you for your appeal. We appreciate the additional information you've provided, as well as your continued interest in the AdSense program. However, after thoroughly re-reviewing your account data and taking your feedback into consideration, our specialists have confirmed that we're unable to reinstate your AdSense account. As a reminder, if you have any questions or concerns about your account, the actions we've taken, or invalid activity in general, you can find more information by visiting http://www.google.com/adsense/support/bin/answer.py?answer=57153.

    I do understand that they have to keep things secret in a way, but I don't know what I'm supposed to do now. Is there a checklist I can go through and re-apply? Where do I re-apply, on the same form? Please help, as we are a small company and can't really budget for hiring a specialist (and don't know any, either). P.S. The current ads on the site are my own, through DoubleClick. Thanks in advance! Best, Karington

    Read the article

  • NFJS Central Iowa Software Symposium Des Moines Trip Report

    - by reza_rahman
    As some of you may be aware, I recently joined the well-respected US-based No Fluff Just Stuff (NFJS) Tour. If you work in the US and still don't know what the No Fluff Just Stuff (NFJS) Tour is, you are doing yourself a very serious disservice. NFJS is by far the cheapest and most effective way to stay up to date through some world-class speakers and talks. Following the US cultural tradition of old-fashioned roadshows, NFJS is basically a set program of speakers and topics offered at major US cities year round.

    The NFJS Central Iowa Software Symposium was held August 8 - 10 in Des Moines. The attendance at the event and my sessions was moderate by comparison to some of the other shows. It is one of the few events of its kind that takes place in this part of the country, so it is extremely important. I had five talks total over two days, more or less back-to-back.

    The first one was my JavaScript + Java EE 7 talk titled "Using JavaScript/HTML5 Rich Clients with Java EE 7". This talk is basically about aligning EE 7 with the emerging JavaScript ecosystem (specifically AngularJS). The slide deck for the talk is here: "JavaScript/HTML5 Rich Clients Using Java EE 7" from Reza Rahman. The demo application code is posted on GitHub. The code should be a helpful resource if this development model is something that interests you. Do let me know if you need help with it, but the instructions should be fairly self-explanatory. I am delivering this material at JavaOne 2014 as a two-hour tutorial. This should give me a little more bandwidth to dig a little deeper, especially on the JavaScript end.

    The second talk (on the second day) was our flagship Java EE 7/8 talk. Currently the talk is basically about Java EE 7, but I'm slowly evolving the talk to transform it into a Java EE 8 talk as we move forward. The following is the slide deck for the talk: "JavaEE.Next(): Java EE 7, 8, and Beyond" from Reza Rahman.

    The third talk I delivered was my Cargo Tracker/Java EE + DDD talk. This talk basically overviews DDD and describes how DDD maps to Java EE using code examples/demos from the Cargo Tracker Java EE Blue Prints project: "Applied Domain-Driven Design Blue Prints for Java EE" from Reza Rahman.

    The fourth was my talk titled "Using NoSQL with ~JPA, EclipseLink and Java EE". The talk covers an interesting gap that there is surprisingly little material on out there. The talk has three parts: a birds-eye view of the NoSQL landscape, how to use NoSQL via a JPA-centric facade using EclipseLink NoSQL, Hibernate OGM, DataNucleus, Kundera, Easy-Cassandra, etc., and how to use NoSQL native APIs in Java EE via CDI. The slides for the talk are here: "Using NoSQL with ~JPA, EclipseLink and Java EE" from Reza Rahman. The JPA-based demo is available here, while the CDI-based demo is available here. Both demos use MongoDB as the data store. Do let me know if you need help getting the demos up and running.

    I finished off the event with a talk titled "Building Java HTML5/WebSocket Applications with JSR 356". The talk introduces HTML5 WebSocket, overviews JSR 356, tours the API, and ends with a small WebSocket demo on GlassFish 4. The slide deck for the talk is posted below: "Building Java HTML5/WebSocket Applications with JSR 356" from Reza Rahman. The demo code is posted on GitHub: https://github.com/m-reza-rahman/hello-websocket.

    My next NFJS show is the Greater Atlanta Software Symposium on September 12 - 14. Here's my tour schedule so far; I'll keep you up to date as the tour goes forward: September 12 - 14, Atlanta. September 19 - 21, Boston. October 17 - 19, Seattle. I hope you'll take this opportunity to get some updates on Java EE as well as the other useful content on the tour.

    Read the article

  • Big Data – ClustrixDB – Extreme Scale SQL Database with Real-time Analytics, Releases Software Download – NewSQL

    - by Pinal Dave
    There are so many things to learn and there is so little time we all have. As we have little time, we need to be selective about whatever we learn. I believe I know quite a lot about SQL, but I still do not know what lies around SQL. I have started to learn about NewSQL recently. If you wonder what NewSQL is, I encourage all of you to read my blog post about NewSQL over here: Big Data – Buzz Words: What is NewSQL – Day 10 of 21. NewSQL databases are quickly becoming popular, providing the scale of NoSQL with SQL features and transactions.

    As a part of learning about NewSQL databases, I have recently started to learn about ClustrixDB. ClustrixDB has been the most mature NewSQL database, used by some of the largest internet sites in the world for over 3 years, with extensive SQL support. In addition to scale, it provides fast real-time analytics by bringing massively parallel processing (MPP), previously available only in warehousing databases, to the transactional database. The reason I am more intrigued about learning ClustrixDB is their recent announcement on Oct 31. ClustrixDB was only available as an appliance, but now, with their software release on Oct 31, everyone can use it. It is now available as forever free for up to 12 cores with community support, and there is a 45-day trial for unlimited cluster sizes. With the forever-free option, I am indeed interested in ClustrixDB now. I know that a few of the leading eCommerce sites in the world use it for their transactional database.

    Here are a few of the details I have quickly noted for ClustrixDB. ClustrixDB allows users to:

    - Scale by simply adding nodes to the cluster with a single command
    - Run billions of transactions a day
    - Run fast real-time analytics
    - Achieve high availability with recovery from node failure
    - Manage itself
    - Easily migrate from MySQL, as it is nearly plug-and-play compatible: use MySQL drivers, tools, and replication

    While I was going through the documentation, I realized that ClustrixDB also has extensive support for SQL features, including complex queries involving joins on a dozen or more tables, aggregates, sorts, and sub-queries. It also supports stored procedures, triggers, foreign keys, partitioned and temporary tables, and fully online schema changes. It is indeed a very mature product and SQL solution.

    ClustrixDB indeed sounds like a very promising solution, so I decided to dig a bit deeper to understand who Clustrix's current customers are, as they have existed in the industry for quite a few years. Their client list is indeed very interesting, and here is my quick research about them:

    - Twoo.com – Europe's largest social discovery (dating) site runs 4.4 billion transactions a day with table sizes over a terabyte, on a 168-core cluster.
    - EngageBDR – Top 3 in the online advertising category, uses ClustrixDB to serve 6.9 billion ads a day through a real-time bidding platform. Their reports went from 4 hours to 15 seconds.
    - NoMoreRack – Top 2 fastest-growing e-commerce company in the US, used ClustrixDB for high availability and fast growth through the Amazon cloud.
    - MakeMyTrip – India's leading travel site runs on ClustrixDB with two clusters running as multi-master in Chennai and Bangalore.

    Many enterprises such as AOL, CSC, Rakuten, and Symantec use ClustrixDB when their applications need scale. I must admit that I am impressed with the information I have learned so far, and now is the time to get some hands-on experience with their product. I want to learn this technology, so in the future when the topic is NewSQL, I know what I am talking about.
    Read more about why Clustrix thinks ClustrixDB might be the right database for you. Download ClustrixDB with me today and install it on your machine, so in the future when we discuss the technical aspects of it, we are all on the same page. The software can be downloaded here. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Big Data, MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Clustrix

    Read the article

  • Give a session on C++ AMP – here is how

    - by Daniel Moth
    Ever since presenting on C++ AMP at the AMD Fusion conference in June, then the Gamefest conference in August, and the BUILD conference in September, I've had numerous requests for my material from folks who want to re-deliver the same session. The C++ AMP session I put together has evolved over the 3 presentations to its final form, which I used at BUILD, so that is the one I recommend you base yours on. Please get the slides and the recording from Channel 9 (I'll refer to slide numbers below). This is how I've been presenting the C++ AMP session:

    Context
    (slide 3, 04:18-08:18) Start with a demo, on my dual-GPU machine. I've been using the N-Body sample (for VS 11 Developer Preview).
    (slide 4) Use an nvidia slide that has additional examples of performance improvements that customers enjoy with heterogeneous computing.
    (slide 5) Talk a bit about the differences today between CPU and GPU hardware, leading to the fact that these will continue to co-exist and that GPUs are great for data-parallel algorithms, but not much else today. One is a jack of all trades and the other is a number cruncher.
    (slide 6) Use the APU example from amd, as one indication that the hardware space is still in motion, emphasizing that the C++ AMP solution is a data-parallel API, not a GPU API. It has a future-proof design for hardware we have yet to see.
    (slide 7) Provide more meta-data, as blogged about when I first introduced C++ AMP.

    Code
    (slides 9-11) Introduce C++ AMP coding with a simplistic array-addition algorithm – the slides speak for themselves.
    (slides 12-13) index<N>, extent<N>, and grid<N>.
    (slides 14-16) array<T,N>, array_view<T,N> and comparison between them.
    (slide 17) parallel_for_each.
    (slides 18, 21) restrict.
    (slides 19-20) actual restrictions of restrict(direct3d) – the slides speak for themselves.
    (slide 22) bring it all together with a matrix multiplication example.
    (slides 23-24) accelerator, and accelerator_view.
    (slides 26-29) Introduce tiling incl. tiled matrix multiplication [tiling probably deserves a whole session instead of 6 minutes!].

    IDE
    (slides 34, 37) Briefly touch on the concurrency visualizer. It supports GPU profiling, but enhancements specific to C++ AMP we hope will come in the Beta timeframe, which is when I'll be spending more time talking about it.
    (slides 35-36, 51:54-59:16) Demonstrate the GPU debugging experience in VS 11.

    Summary
    (slide 39) Re-iterate some of the points of slide 7, and add the point that the C++ AMP spec will be open for other compiler vendors to implement, even on other platforms (in fact, Microsoft is actively working on that).
    (slide 40) Links to content – see slide – including where all your questions should go: http://social.msdn.microsoft.com/Forums/en/parallelcppnative/threads.

    "But I don't have time for a full blown session, I only need 2 (or just 1, or 3) C++ AMP slides to use in my session on related topic X." If all you want is a small number of slides, you can take some from the session above and customize them. But because I am so nice, I have created some slides for you, including talking points in the notes section. Download them here. Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • A Reusable Builder Class for .NET testing

    - by Liam McLennan
    When writing tests, other than end-to-end integration tests, we often need to construct test data objects. Of course this can be done using the class's constructor and manually configuring the object, but getting many objects into a valid state soon becomes a large percentage of the testing effort. After many years of painstakingly creating builders for each of my domain objects, I have finally become lazy enough to bother to write a generic, reusable builder class for .NET.

    To use it, you instantiate an instance of the builder and configure it with a builder method for each class you wish it to be able to build. The builder method should require no parameters and should return a new instance of the type in a default, valid state. In other words, the builder method should be a Func<TypeToBeBuilt>. The best way to make this clear is with an example. In my application I have the following domain classes that I want to be able to use in my tests:

        public class Person
        {
            public string Name { get; set; }
            public int Age { get; set; }
            public bool IsAndroid { get; set; }
        }

        public class Building
        {
            public string Street { get; set; }
            public Person Manager { get; set; }
        }

    The builder for this domain is created like so:

        build = new Builder();
        build.Configure(new Dictionary<Type, Func<object>>
        {
            {typeof(Building), () => new Building {Street = "Queen St", Manager = build.A<Person>()}},
            {typeof(Person), () => new Person {Name = "Eugene", Age = 21}}
        });

    Note how Building depends on Person, even though the Person builder method is not defined yet. Now in a test I can retrieve a valid object from the builder:

        var person = build.A<Person>();

    If I need a class in a customised state, I can supply an Action<TypeToBeBuilt> to mutate the object post-construction:

        var person = build.A<Person>(p => p.Age = 99);

    The power and efficiency of this approach becomes apparent when your tests require larger and more complex objects than Person and Building. When I get some time I intend to implement the same functionality in JavaScript and Ruby. Here is the full source of the Builder class:

        public class Builder
        {
            private Dictionary<Type, Func<object>> defaults;

            public void Configure(Dictionary<Type, Func<object>> defaults)
            {
                this.defaults = defaults;
            }

            public T A<T>()
            {
                if (!defaults.ContainsKey(typeof(T)))
                    throw new ArgumentException("No object of type " + typeof(T).Name + " has been configured with the builder.");

                T o = (T)defaults[typeof(T)]();
                return o;
            }

            public T A<T>(Action<T> customisation)
            {
                T o = A<T>();
                customisation(o);
                return o;
            }
        }
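
    Since the author mentions wanting to port this to JavaScript, here is a minimal sketch of how that port might look (my illustration, not his code; string keys stand in for .NET types, and all names are hypothetical):

        function Builder() {
            var defaults = {}; // map from type name to a zero-argument factory function

            // register a map of { typeName: factoryFn } pairs
            this.configure = function (config) {
                defaults = config;
            };

            // build an instance by name, optionally mutating it post-construction
            this.a = function (typeName, customisation) {
                if (!defaults[typeName]) {
                    throw new Error("No object of type " + typeName + " has been configured with the builder.");
                }
                var o = defaults[typeName]();
                if (customisation) { customisation(o); }
                return o;
            };
        }

        // usage, mirroring the C# example above
        var build = new Builder();
        build.configure({
            building: function () { return { street: "Queen St", manager: build.a("person") }; },
            person:   function () { return { name: "Eugene", age: 21 }; }
        });
        var person = build.a("person", function (p) { p.age = 99; });

    As in the C# version, the factories run lazily, so "building" can reference "person" before it is registered.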

    Read the article

  • Investigating on xVelocity (VertiPaq) column size

    - by Marco Russo (SQLBI)
    In January I published an article about how to optimize high-cardinality columns in VertiPaq. In the meantime, VertiPaq has been rebranded to xVelocity: the official name is now "xVelocity in-memory analytics engine (VertiPaq)", but using xVelocity and VertiPaq when we talk about Analysis Services has the same meaning. In this post I'll show how to investigate the column sizes of an existing Tabular database, so that you can find the most important columns to be optimized.

    A first approach can be looking in the DataDir of Analysis Services for the folder containing the database. Then look for the biggest files in all subfolders, and you will find the name of a file that contains the name of the most expensive column. However, this heuristic process is not very optimized. A better approach is using a DMV that provides the exact information. For example, by using the following query (open SSMS, open an MDX query on the database you are interested in, and execute it), you will see all database objects sorted by used size in descending order:

        SELECT * FROM $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS ORDER BY used_size DESC

    You can look at the first rows in order to understand what the most expensive columns in your tabular model are. The interesting data provided are:

    - TABLE_ID: the name of the object – it can also be a dictionary or an index
    - COLUMN_ID: the column name the object belongs to – you can also see ID_TO_POS and POS_TO_ID in case they refer to internal indexes
    - RECORDS_COUNT: the number of rows in the column
    - USED_SIZE: the memory used for the object

    By looking at the ratio between USED_SIZE and RECORDS_COUNT, you can understand what you can do in order to optimize your tabular model. Your options are:

    - Remove the column. Yes, if it contains data you will never use in a query, simply remove the column from the tabular model.
    - Change granularity. If you are tracking time and you included milliseconds but seconds would be enough, round the data source column to the nearest second. If you have a floating-point number but two decimals are good enough (e.g. a temperature), round the number to the nearest decimal that is relevant to you.
    - Split the column. Create two or more columns that have to be combined together in order to produce the original value. This technique is described in the VertiPaq optimization article.
    - Sort the table by that column. When you read the data source, you might consider sorting data by this column, so that the compression will be more efficient. However, this technique works better on columns that don't have too many distinct values, and you will probably move the problem to another column. Sorting data starting from the lower-density columns (those with a small number of distinct values) and going to higher-density columns (those with high cardinality) is the technique that provides the best compression ratio.

    After the optimization, you should be able to reduce the used size and improve the count/size ratio you measured before. If you are interested in a longer discussion about internal storage in VertiPaq and you want to understand why this approach can save you space (and time), you can attend my 24 Hours of PASS session "VertiPaq Under the Hood" on March 21 at 08:00 GMT.
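
    To make the ratio concrete with illustrative numbers (mine, not from the article): a column segment reporting RECORDS_COUNT = 100,000,000 and USED_SIZE = 800,000,000 bytes averages 8 bytes per row, while a well-compressed low-cardinality column typically averages well under 1 byte per row; the columns with the highest per-row cost are the ones worth optimizing first.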

    Read the article

  • Web Developer - How to enhance my skillset?

    - by atif089
    First of all, pardon my English; I am not a native English speaker. I have been a web developer for the past 4 years. In these 4 years I have spent my time on the internet learning things. My current skillset comprises: HTML, CSS, PHP, MySQL, jQuery (I would not say JS but rather jQuery, because I am good at using jQuery and bad with plain JavaScript). The above things seemed like an easier part of my life, as I learned them quickly. But now I would really like to enhance my skillset, and I am pretty confused about which way to move ahead, considering that I have to learn things using the web and references on my own.

    Design
    My first option is design. Shall I get started with design and start using Adobe Illustrator, Photoshop, Flash, Flex? Design along with my previous skills looks like a money maker to me, as both are co-related where web design is concerned. And it's easier to learn the first two, and I hope I can get tutorials for the last two as well.

    Marketing
    A lot of my existing clients have asked me if I do SEO, so this looks like a good field as well. I cannot estimate the scope of SEO, but I assume it has a long future. Since I am business-minded as well, and there are a lot of tutorials around, should I start with SEO, SEM, social media, PPC, or whatever it consists of?

    Software Development
    The most complex and hardest path (perhaps), but the easiest way to find a decent job in my location. If I go for software development, what platform should I ideally be going after? Should it be C# for Windows development, or ASP.NET (once again enhancing my skill set), J2EE (there are a lot of jobs for J2EE developers here), or plain C and C++? Also, I think it is difficult to learn programming languages right from Hello World using the internet. I have no clue how I learned PHP, but I am sort of a pro now, while these other languages seem like a disaster to me. I can't figure out whether that is because PHP is easier or because there were a lot of tutorials around for PHP. Anyway, is it also possible to learn software development right from Hello World using the web?

    Database / Server (Linux) / Network Administration
    Seems like a job with decent pay, but fewer jobs and a bit harder to learn online (not sure).

    What would be the right track to move ahead? P.S. Age is not a constraint for me, as I am between 20-21, and I come from an IT background. I know quite a few basics: C (up to structures), C++ (up to objects; I was not able to understand templates), core Java (some basics and OOP concepts), RDBMS, Visual Basic 6 (used to do this long back), UNIX (a bunch of commands like who, finger, chmod, ls, and a bit of #bash). Or is there anything else that I left out? I need you guys to please give me feedback and the reason why I should select that field.

    Read the article

  • NHibernate Pitfalls: Custom Types and Detecting Changes

    - by Ricardo Peres
    This is part of a series of posts about NHibernate Pitfalls. See the entire collection here. NHibernate supports the declaration of properties of user-defined types, that is, not entities, collections, or primitive types. These are used for mapping database columns, of any type, into a different type, which may not even be an entity; think, for example, of a custom user type that converts a BLOB column into an Image.

    User types must implement the interface NHibernate.UserTypes.IUserType. This interface specifies an Equals method that is used for comparing two instances of the user type. If this method returns false, the entity is marked as dirty and, when the session is flushed, will trigger an UPDATE. So, in your custom user type, you must implement this carefully so that the entity is not mistakenly considered changed. For example, you can cache the original column value inside the user type and compare it with the one in the other instance. Let's see an example implementation of a custom user type that converts a Byte[] from a BLOB column into an Image:

        [Serializable]
        public sealed class ImageUserType : IUserType
        {
            private Byte[] data = null;

            public ImageUserType()
            {
                this.ImageFormat = ImageFormat.Png;
            }

            public ImageFormat ImageFormat { get; set; }

            public Boolean IsMutable
            {
                get { return (true); }
            }

            public Object Assemble(Object cached, Object owner)
            {
                return (cached);
            }

            public Object DeepCopy(Object value)
            {
                return (value);
            }

            public Object Disassemble(Object value)
            {
                return (value);
            }

            public new Boolean Equals(Object x, Object y)
            {
                return (Object.Equals(x, y));
            }

            public Int32 GetHashCode(Object x)
            {
                return ((x != null) ? x.GetHashCode() : 0);
            }

            public override Int32 GetHashCode()
            {
                return ((this.data != null) ? this.data.GetHashCode() : 0);
            }

            public override Boolean Equals(Object obj)
            {
                ImageUserType other = obj as ImageUserType;

                if (other == null)
                {
                    return (false);
                }

                if (Object.ReferenceEquals(this, other) == true)
                {
                    return (true);
                }

                return (this.data.SequenceEqual(other.data));
            }

            public Object NullSafeGet(IDataReader rs, String[] names, Object owner)
            {
                Int32 index = rs.GetOrdinal(names[0]);
                Byte[] data = rs.GetValue(index) as Byte[];

                this.data = data as Byte[];

                if (data == null)
                {
                    return (null);
                }

                using (MemoryStream stream = new MemoryStream(this.data ?? new Byte[0]))
                {
                    return (Image.FromStream(stream));
                }
            }

            public void NullSafeSet(IDbCommand cmd, Object value, Int32 index)
            {
                if (value != null)
                {
                    Image data = value as Image;

                    using (MemoryStream stream = new MemoryStream())
                    {
                        data.Save(stream, this.ImageFormat);
                        value = stream.ToArray();
                    }
                }

                (cmd.Parameters[index] as DbParameter).Value = value ?? DBNull.Value;
            }

            public Object Replace(Object original, Object target, Object owner)
            {
                return (original);
            }

            public Type ReturnedType
            {
                get { return (typeof(Image)); }
            }

            public SqlType[] SqlTypes
            {
                get { return (new SqlType[] { new SqlType(DbType.Binary) }); }
            }
        }

    In this case, we need to cache the original Byte[] data because it's not easy to compare two Image instances, unless, of course, they are the same instance.

    Read the article

  • Error while updating

    - by Alwin Doss
    I get the following error while updating Ubuntu 12.04 LTS. installArchives() failed:

        perl: warning: Setting locale failed.
        perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_IN.ISO8859-1" are supported and installed on your system.
        perl: warning: Falling back to the standard locale ("C").
        locale: Cannot set LC_CTYPE to default locale: No such file or directory
        locale: Cannot set LC_MESSAGES to default locale: No such file or directory
        locale: Cannot set LC_ALL to default locale: No such file or directory
        Preconfiguring packages ...
        perl: warning: Setting locale failed.
        perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_IN.ISO8859-1" are supported and installed on your system.
        perl: warning: Falling back to the standard locale ("C").
        locale: Cannot set LC_CTYPE to default locale: No such file or directory
        locale: Cannot set LC_MESSAGES to default locale: No such file or directory
        locale: Cannot set LC_ALL to default locale: No such file or directory
        Preconfiguring packages ...
        perl: warning: Setting locale failed.
        perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_IN.ISO8859-1" are supported and installed on your system.
        perl: warning: Falling back to the standard locale ("C").
        locale: Cannot set LC_CTYPE to default locale: No such file or directory
        locale: Cannot set LC_MESSAGES to default locale: No such file or directory
        locale: Cannot set LC_ALL to default locale: No such file or directory
        Preconfiguring packages ...
        perl: warning: Setting locale failed.
        perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_IN.ISO8859-1" are supported and installed on your system.
        perl: warning: Falling back to the standard locale ("C").
        locale: Cannot set LC_CTYPE to default locale: No such file or directory
        locale: Cannot set LC_MESSAGES to default locale: No such file or directory
        locale: Cannot set LC_ALL to default locale: No such file or directory
        Preconfiguring packages ...
        (Reading database ... 5% ... 10% ... 15% ... 20% ... 25% ... 30% ... 35% ... 40% ... 45% ... 50% ... 55% ... 60% ... 65% ... 70% ... 75% ... 80% ... 85% ... 90% ... 95% ... 100%
        (Reading database ... 430284 files and directories currently installed.)
        Preparing to replace libxml2-dev 2.7.8.dfsg-5.1ubuntu4.1 (using .../libxml2-dev_2.7.8.dfsg-5.1ubuntu4.2_i386.deb) ...
        Unpacking replacement libxml2-dev ...
        Preparing to replace libxml2 2.7.8.dfsg-5.1ubuntu4.1 (using .../libxml2_2.7.8.dfsg-5.1ubuntu4.2_i386.deb) ...
        Unpacking replacement libxml2 ...
        Preparing to replace gstreamer0.10-plugins-bad 0.10.22.3-2ubuntu2 (using .../gstreamer0.10-plugins-bad_0.10.22.3-2ubuntu2.1_i386.deb) ...
        Unpacking replacement gstreamer0.10-plugins-bad ...
        Preparing to replace libgstreamer-plugins-bad0.10-0 0.10.22.3-2ubuntu2 (using .../libgstreamer-plugins-bad0.10-0_0.10.22.3-2ubuntu2.1_i386.deb) ...
        Unpacking replacement libgstreamer-plugins-bad0.10-0 ...
        Preparing to replace ubuntu-keyring 2011.11.21 (using .../ubuntu-keyring_2011.11.21.1_all.deb) ...
        /var/lib/dpkg/info/samba4.postinst: 14: /var/lib/dpkg/info/samba4.postinst: /usr/share/samba/setoption.pl: Permission denied
        dpkg: error processing samba4 (--configure):
         subprocess installed post-installation script returned error exit status 126

    Read the article

  • lvm disappeared after disc replacement on raid10

    - by user142295
    Here's my problem: I am running Ubuntu 12.04 on a RAID10 (4 disks), on top of which I installed an LVM with two volume groups (one for /, one for /home). The layout of the disks is as follows:

        Disk /dev/sda: 1500.3 GB, 1500301910016 bytes
        255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x0003f3b6

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *          63      481949      240943+  83  Linux
        /dev/sda2          481950  2910640634  1455079342+  fd  Linux raid autodetect
        /dev/sda3      2910640635  2930272064     9815715   82  Linux swap / Solaris

        Disk /dev/sdb: 1500.3 GB, 1500301910016 bytes
        255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00069785

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1              63  2910158684  1455079311   fd  Linux raid autodetect
        /dev/sdb2      2910158685  2930272064    10056690   82  Linux swap / Solaris

        Disk /dev/sdc: 1500.3 GB, 1500301910016 bytes
        255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdc1              63  2910158684  1455079311   fd  Linux raid autodetect
        /dev/sdc2      2910158685  2930272064    10056690   82  Linux swap / Solaris

        Disk /dev/sdd: 1500.3 GB, 1500301910016 bytes
        255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000f14de

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdd1              63  2910158684  1455079311   fd  Linux raid autodetect
        /dev/sdd2      2910158685  2930272064    10056690   82  Linux swap / Solaris

    The first disk (/dev/sda) contains the /boot partition on /dev/sda1. I use grub2 to boot the system off this partition. On top of this RAID10 I installed two volume groups, one for /, one for /home. This system worked well; I even exchanged two disks during the last two years. It always worked. But not this time. For the first time, /dev/sda broke. I do not know if this is an issue; I know I would have struggled anyway to overcome the problem with /boot installed on that disk and grub2 installed on the MBR of /dev/sda.
    Anyway, I did what I always did:

    1. start Knoppix
    2. fire up the raid: sudo mdadm --examine -scan, which returns ARRAY /dev/md127 UUID=0dbf4558:1a943464:132783e8:19cdff95
    3. start it up: sudo mdadm --assemble /dev/md127
    4. fail the failing disk (SMART event): sudo mdadm /dev/md127 --fail /dev/sda2
    5. remove the failing disk: sudo mdadm /dev/md127 --remove /dev/sda2
    6. stop the raid: sudo mdadm -S /dev/md127
    7. take out the disk and replace it with a new one
    8. create the same partitions as on the failing one
    9. add it to the raid: sudo mdadm --assemble /dev/md127, then sudo mdadm /dev/md127 --add /dev/sda2
    10. wait 4 hours

    All looks fine. cat /proc/mdstat returns:

        Personalities : [raid10]
        md127 : active raid10 sda2[0] sdd1[3] sdc1[2] sdb1[1]
              2910158464 blocks 64K chunks 2 near-copies [4/4] [UUUU]

        unused devices: <none>

    and sudo mdadm --detail /dev/md127 returns:

        /dev/md127:
                Version : 0.90
          Creation Time : Wed Jun 10 13:08:46 2009
             Raid Level : raid10
             Array Size : 2910158464 (2775.34 GiB 2980.00 GB)
          Used Dev Size : 1455079232 (1387.67 GiB 1490.00 GB)
           Raid Devices : 4
          Total Devices : 4
        Preferred Minor : 127
            Persistence : Superblock is persistent
            Update Time : Thu Mar 21 16:27:40 2013
                  State : clean
         Active Devices : 4
        Working Devices : 4
         Failed Devices : 0
          Spare Devices : 0
                 Layout : near=2
             Chunk Size : 64K
                   UUID : 0dbf4558:1a943464:132783e8:19cdff95 (local to host Microknoppix)
                 Events : 0.4824680

            Number   Major   Minor   RaidDevice State
               0       8        2        0      active sync   /dev/sda2
               1       8       17        1      active sync   /dev/sdb1
               2       8       33        2      active sync   /dev/sdc1
               3       8       49        3      active sync   /dev/sdd1

    However, there is no trace of the volume groups. Rebooting into Knoppix does not help. Restarting the old system (I actually replugged and re-added the failing disk for that; the system begins to start, but then fails to see the / partition, no wonder if the volume group is gone) does not help. sudo vgscan, sudo vgdisplay, sudo lvs, sudo lvdisplay, and sudo vgscan --mknodes all returned "No volume groups found." I am completely at a loss. Can anyone tell me if and how I can recover my data? Thanks in advance!

    Read the article

  • Test your internet connection - Emtel Fixed Broadband

    Already at the beginning of April, I had a phone conversation with my representative at Emtel Ltd. about some upcoming issues due to the ongoing construction work in my neighbourhood. Unfortunately, they finally raised the house two levels above ours, and of course this has a negative impact on the visibility between the WiMAX outdoor unit on the roof and the aimed access point at Medine. So, today I had a technical team here to do a site survey and to come up with potential solutions. Short version: it doesn't look good after all.

    The site survey
    Well, the two technicians did their work properly, and even re-arranged the antenna to check the connection with another end point down at La Preneuse. But no improvements. Looks like we are out of luck, since the construction next door hasn't finished yet, and at the moment it even looks like they are planning to put at least one more level on top. I really wonder about the sanity of the responsible bodies at the local district council. But that's another story. Anyway, the outdoor unit was once again pointed towards Medine and properly fixed with new cable guides (air from the sea and rust...). Both of them did a good job and fine-tuned the reception signal to a mere 3 over 9, compared to the original 7 over 9 I had before the daily terror started. The site survey has been done, and now it's up to Emtel to come up with (better) solutions. Well, I wouldn't mind having an unlimited, symmetric 3G/UMTS or even LTE connection. Let's see what they can do...

    Testing the connection
    There are several online sites available which offer to check certain aspects of your internet connection. Personally, I'm used to speedtest.net and it works very well. I think it is good and necessary to check your connection from time to time, and only a couple of days ago I posted the following on Emtel's wall on Facebook (21.05.2013, 14:06 hrs): "Dear Emtel, could you eventually provide an answer on the miserable results of SpeedTest? I chose Rose Hill (Hosted by Emtel Ltd.) as testing endpoint..." Sadly, no response to this. Seems that the marketing department is not willing to deal with customers on Facebook. Okay, over at speedtest.net you can use their Flash-based test suite to check your connection to quite a number of servers of different providers world-wide. It's actually very interesting to see the results for different end points and to compare them to each other.

    The results
    Following are the results for Rose Hill (hosted by Emtel) and for Frankfurt, Germany (hosted by Vodafone DE):

    Speedtest.net result of 30.05.2013 between Flic en Flac and Rose Hill, Mauritius (Emtel - Fixed Broadband)
    Speedtest.net result of 30.05.2013 between Flic en Flac and Frankfurt, Germany (Emtel - Fixed Broadband)

    Luckily, the results are quite similar in terms of connection speed, which is good. I'm currently on a WiMAX tariff called 'Classic Browsing 2', or Fixed Broadband as they call it now, which provides a symmetric line of 768 Kbps (or roughly 0.75 Mbps). In terms of downloads or uploads, this means that I would be able to transfer files in either direction at approximately 96 KB/s. Frankly speaking, thanks to compression, my choice of browser, and my operating system, I usually exceed this value and have download rates of up to 120 KB/s; not too bad after all. Only the ping times are a little bit of a concern.
    Due to the difference in distance, or better said based on the number of hops between the endpoints, they indicate the amount of time it takes to send a packet from your machine to the remote server and get a response back. A lower value is better, and usually the ping is less than 300 ms between Mauritius and Europe.

    The alternatives in Mauritius
    Not sure whether I should note this down, because for my requirements there are no alternatives to Emtel WiMAX at the moment. It would be great to have your opinion on the situation of internet connectivity in Mauritius. Are there really alternatives? And if so, what are the conditions?

    Read the article

  • Class design issue

    - by user2865206
    I'm new to OOP, and a lot of times I become stumped in situations similar to this example. Task: Generate an XML document that contains information about a person. Assume the information is readily available in a database. Here is an example of the structure:

        <Person>
          <Name>John Doe</Name>
          <Age>21</Age>
          <Address>
            <Street>100 Main St.</Street>
            <City>Sylvania</City>
            <State>OH</State>
          </Address>
          <Relatives>
            <Parents>
              <Mother>
                <Name>Jane Doe</Name>
              </Mother>
              <Father>
                <Name>John Doe Sr.</Name>
              </Father>
            </Parents>
            <Siblings>
              <Brother>
                <Name>Jeff Doe</Name>
              </Brother>
              <Brother>
                <Name>Steven Doe</Name>
              </Brother>
            </Siblings>
          </Relatives>
        </Person>

    OK, let's create a class for each tag (i.e. Person, Name, Age, Address):

    - Let's assume each class is only responsible for itself and the elements directly contained.
    - Each class will know (have defined by default) the classes that are directly contained within it.
    - Each class will have a process() function that will add itself and its children to the XML document we are creating.
    - When a child is drawn, as in the previous line, we will have it call process() as well.
    - Now we are in a recursive loop where each object draws its children until all are drawn.

    But what if only some of the tags need to be drawn, and the rest are optional? Some are optional based on whether the data exists (if we have it, we must draw it), and some are optional based on the preferences of the user generating the document. How do we make sure each object has the data it needs to draw itself and its children? We can pass down a massive array through every object, but that seems shitty, doesn't it? We could have each object query the database for it, but that's a lot of queries, and how does it know what its query is? What if we want to get rid of a tag later? There is no way to reference them. I've been thinking about this for 20 hours now. I feel like I am misunderstanding a design principle or am just approaching this all wrong. How would you go about programming something like this? I suppose this problem could apply to any scenario where there are classes that create other classes, but the classes created need information to run. How do I get the information to them in a way that doesn't seem fucky? Thanks for all of your time; this has been kicking my ass.
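
    As a sketch of the recursive process() idea described above (JavaScript purely for illustration; the names and structure are hypothetical, not a prescribed design), each node can receive just the slice of data it needs instead of one massive shared array, and an optional tag simply renders nothing when its data is absent:

        // each node type declares its children and renders only itself
        function node(tag, getData, children) {
            return {
                // process() receives just the data slice this node needs,
                // and returns "" when the data is missing (optional tag)
                process: function (data) {
                    var value = getData(data);
                    if (value === undefined || value === null) { return ""; }
                    var inner = (children || []).map(function (c) { return c.process(value); }).join("");
                    return "<" + tag + ">" + (children ? inner : value) + "</" + tag + ">";
                }
            };
        }

        // usage with the Person example from the question
        var person = node("Person", function (d) { return d; }, [
            node("Name", function (d) { return d.name; }),
            node("Age",  function (d) { return d.age; })
        ]);

        console.log(person.process({ name: "John Doe", age: 21 }));
        // -> <Person><Name>John Doe</Name><Age>21</Age></Person>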

    Read the article

  • JTF Translation Festival 2011

    - by user13133135
    It's been a while since the last post... I have been working on machine translation (MT) and post-editing (PE) for Japanese. Last year was my first step into the MT+PE area, and I would take this year as an advanced step. I plan to talk on "Post Editing 2011 (Advanced Step)" on November 29 at the JTF Translation Festival.

    (5 days before the application deadline!) 21st JTF Translation Festival
    Date: Nov 29, 2011 (Tuesday), 9:30-20:30 (gate opens 9:00)
    Place: Arcadia Ichigaya, Tokyo
    http://www.jtf.jp/jp/festival/festival_top.html

    In this session, I would like to expand on the thought of "how to best utilize MT and PE", both from the view of the client and of the translator. I will show some examples of post-editing as a guideline for the best and most effective way to post-edit Japanese. I will also discuss what the best practice is for MT users (clients). The session lasts 90 minutes... sounds a little long to me, but I want to spend more time on discussion than last year. It would be great to exchange thoughts and experiences about MT and PE. What are your concerns or problems in your daily work with MT? If you have some, please bring them to my session at the JTF Translation Festival. Here are my session details (Japanese): http://www.jtf.jp/jp/festival/festival_program.html#koen_04

    Here is the outline of my session: What is the advantage of MT? Does it solve all the problems of cost, resources, and quality? Well, it is not magic, so you cannot expect everything at once. When you have a problem, there are 3 options: 1. be patient and wait until everything is ready; 2. run a workaround using anything available now; 3. find out something completely new and spend time and money. This time, I will focus on option 2: do something with what we already have. That is, I will discuss how we can best utilize MT in our daily business. My view goes two ways: from the client's point of view, and from the translator's point of view. Looking forward to meeting many people and exchanging thoughts and information!

    Read the article

  • Changes in the Maven Embedded GlassFish plugin

    - by Romain Grecourt
    The plugin changed its Maven coordinates (a.k.a. GAV) over time:

    - version <= 3.1.1: available under org.glassfish:maven-glassfish-embedded-plugin
    - version >= 3.1.2: available under org.glassfish.embedded:maven-glassfish-embedded-plugin

    The goal "glassfish-embedded:run" has changed the way it reads the deployment configuration in the latest version, 4.0. Projects using previous versions of the plugin will stop working with this goal. Here is an example of the old behavior:

        <plugin>
          <groupId>org.glassfish.embedded</groupId>
          <artifactId>maven-embedded-glassfish-plugin</artifactId>
          <version>3.1.2.2</version>
          <configuration>
            <app>target/${project.build.finalName}.war</app>
            <contextRoot>/</contextRoot>
            <goalPrefix>embedded-glassfish</goalPrefix>
            <autoDelete>true</autoDelete>
            <port>8080</port>
          </configuration>
        </plugin>

    The new behavior is as follows:

        <plugin>
          <groupId>org.glassfish.embedded</groupId>
          <artifactId>maven-embedded-glassfish-plugin</artifactId>
          <version>4.0</version>
          <configuration>
            <goalPrefix>embedded-glassfish</goalPrefix>
            <autoDelete>true</autoDelete>
            <port>8080</port>
          </configuration>
          <executions>
            <execution>
              <goals>
                <goal>deploy</goal>
              </goals>
              <configuration>
                <app>target/${project.build.finalName}.war</app>
                <contextRoot>/</contextRoot>
              </configuration>
            </execution>
          </executions>
        </plugin>

    The new version looks for an execution of the deploy goal and the associated configuration when running the goal 'run'. Both versions allow you to run the latest glassfish-embedded jar; you only need to add it as a plugin dependency:

        <plugin>
          [...]
          <dependencies>
            <dependency>
              <groupId>org.glassfish.main.extras</groupId>
              <artifactId>glassfish-embedded-all</artifactId>
              <version>4.0</version>
            </dependency>
          </dependencies>
        </plugin>

    Read the article

  • Understanding GoldenGate LAG

    - by Liu Maclean
    Checking LAG from GGSCI

    In an OGG setup, the Extract reads changes from the Oracle redo (the online redo logfiles) and the Replicat applies them at the target. When the reported lag keeps growing, it usually means the Replicat cannot apply the changes fast enough; OGG can spread the apply work across several Replicats using the RANGE function, and the Replicat parameter MAXTRANSOPS also influences apply behavior.

    Lag can build up at each of these stages:

    - the Extract reading from the redo log and writing to the trail / remote host
    - the datapump Extract shipping the local extract trail to the remote host
    - the collector receiving the data and writing it to the local trail
    - the Replicat reading the local trail and applying the changes

    In GGSCI, both the INFO/STATUS commands and the SEND command report lag, with these differences:

    - the lag reported by INFO is less precise than the lag reported by SEND
    - INFO reports the lag recorded at the last checkpoint, obtained through the MANAGER
    - SEND <OBJECT>, GETLAG asks the process <OBJECT> directly for its current lag

    Lag is reported as a time interval (in seconds), not as a data volume in kilobytes. It is essentially the difference between the time the Extract/Pump/Replicat processed the most recent record and that record's timestamp in the data source; when the processes keep up in real time, the lag stays near zero. When a burst of redo log is generated, the lag of the ER processes grows until they catch up. In the demo below, the Extract is stopped for a while so that lag accumulates, and after the restart the Extract works the backlog off:

        GGSCI (XIANGBLI-CN) 27> stop load2
        Sending STOP request to EXTRACT LOAD2 ...
        Request processed.

        GGSCI (XIANGBLI-CN) 28> start load2
        Sending START request to MANAGER ...
        EXTRACT LOAD2 starting

        GGSCI (XIANGBLI-CN) 31> info load2
        EXTRACT    LOAD2     Last Started 2012-09-18 20:26   Status RUNNING
        Checkpoint Lag       00:04:34 (updated 00:00:08 ago)
        Log Read Checkpoint  Oracle Redo Logs
                             2012-09-18 20:21:32  Seqno 44, RBA 13750272
                             SCN 0.1845479 (1845479)

        GGSCI (XIANGBLI-CN) 35> lag load2
        Sending GETLAG request to EXTRACT LOAD2 ...
        Last record lag: 130 seconds.
        At EOF, no more records to process.

        GGSCI (XIANGBLI-CN) 36> info load2
        EXTRACT    LOAD2     Last Started 2012-09-18 20:26   Status RUNNING
        Checkpoint Lag       00:00:00 (updated 00:00:02 ago)
        Log Read Checkpoint  Oracle Redo Logs
                             2012-09-18 20:27:33  Seqno 44, RBA 13817856
                             SCN 0.1845671 (1845671)

    Watch Last record lag and Checkpoint Lag to see how an Extract/Pump/Replicat catches up. If several GB of redo are outstanding, catching up can take the process quite a while. Note also that because the INFO lag is checkpoint-based, long-running transactions (LRTs) that have not yet committed inflate the reported checkpoint lag: the lag only settles once those transactions commit. On the Replicat side, the MAXTRANSOPS parameter can likewise affect the observed lag.

    Read the article

  • Problems with jQuery getJSON using local files in Chrome

    - by Tauren
    I have a very simple test page that uses XHR requests with jQuery's $.getJSON and $.ajax methods. The same page works in some situations and not in others. Specifically, it doesn't work in Chrome on Ubuntu. I'm testing on Ubuntu 9.10 with Chrome 5.0.342.7 beta and Mac OSX 10.6.2 with Chrome 5.0.307.9 beta.

    It works correctly when the files are installed on a web server, from both Ubuntu/Chrome and Mac/Chrome (try it out here). It works correctly when the files are installed on the local hard drive in Mac/Chrome (accessed with file:///...). It FAILS when the files are installed on the local hard drive in Ubuntu/Chrome (accessed with file:///...).

    The small set of 3 files can be downloaded in a tar/gzip file from here: http://issues.tauren.com/testjson/testjson.tgz

    When it works, the Chrome console will say:

        XHR finished loading: "http://issues.tauren.com/testjson/data.json".
        index.html:16 Using getJSON
        index.html:21 Object result: "success" __proto__: Object
        index.html:22 success
        XHR finished loading: "http://issues.tauren.com/testjson/data.json".
        index.html:29 Using ajax with json dataType
        index.html:34 Object result: "success" __proto__: Object
        index.html:35 success
        XHR finished loading: "http://issues.tauren.com/testjson/data.json".
        index.html:46 Using ajax with text dataType
        index.html:51 {"result":"success"}
        index.html:52 undefined

    When it doesn't work, the Chrome console will show this:

        index.html:16 Using getJSON
        index.html:21 null
        index.html:22 Uncaught TypeError: Cannot read property 'result' of null
        index.html:29 Using ajax with json dataType
        index.html:34 null
        index.html:35 Uncaught TypeError: Cannot read property 'result' of null
        index.html:46 Using ajax with text dataType
        index.html:51
        index.html:52 undefined

    Notice that it doesn't even show the XHR requests, although the success handler is run. I swear this was working previously in Ubuntu/Chrome, and am worried something got messed up. I already uninstalled and reinstalled Chrome, but that didn't help. Can someone try it out locally on your Ubuntu system and tell me if you have any troubles? Note that it seems to be working fine in Firefox.
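
    One plausible cause, offered as an assumption to verify rather than a diagnosis: Chrome of this era treated XMLHttpRequest against file:// URLs as a cross-origin violation and blocked it by default, and could be started with the --allow-file-access-from-files command-line flag to relax that; serving the three files from a local web server (e.g. python -m SimpleHTTPServer) sidesteps the restriction entirely. Why the Mac and Ubuntu builds behave differently is unclear, possibly a behavior change between 5.0.307 and 5.0.342.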

    Read the article

  • Uncaught SyntaxError: Unexpected token ILLEGAL

    - by sathis
May I know what's wrong inside this? I am new to the world of programming, so if you could help me it would be wonderful. The error comes on the line arr[${i.count-1}][1]=${employee.email}; Awaiting your response. The entire code is as follows:
$(function() {
var arr = new Array();
arr[0]=new Array(4);
arr[0][0]=sathis;
arr[0][1][email protected];
arr[0][2]=namakkal;
arr[0][3]=21;
arr[1]=new Array(4);
arr[1][0]=ganesh;
arr[1][1][email protected];
arr[1][2]=karaikudi;
arr[1][3]=22;
arr[2]=new Array(4);
arr[2][0]=karthik;
arr[2][1][email protected];
arr[2][2]=trichy;
arr[2][3]=25;
var str="<table><tr><th>Name</th><th>Email</th><th>City</th><th>Age</th></tr><tr><td>";
$("#emp_name").change(function() {
var i=$(this).val();
str=str+arr[i-1][0]+"</td><td>"+arr[i-1][1]+"</td><td>"+arr[i-1][2]+"</td><td>"+arr[i-1][3]+"</td><tr></table>";
$("#viewer").html(str);
alert(str);
});
});
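For reference, the immediate syntax problem is that the string values are unquoted (sathis, namakkal and the email literals are not valid JavaScript expressions), and ${...} is JSP/EL syntax that the browser never evaluates, which is what produces "Unexpected token ILLEGAL". A corrected sketch with placeholder values (the original emails are redacted above, so these are invented):
$(function() {
  // each row: [name, email, city, age] (placeholder data)
  var arr = [
    ['sathis', 'sathis@example.com', 'namakkal', 21],
    ['ganesh', 'ganesh@example.com', 'karaikudi', 22],
    ['karthik', 'karthik@example.com', 'trichy', 25]
  ];
  $('#emp_name').change(function() {
    var i = parseInt($(this).val(), 10) - 1;  // assumes the option values start at 1
    var str = '<table><tr><th>Name</th><th>Email</th><th>City</th><th>Age</th></tr>' +
              '<tr><td>' + arr[i].join('</td><td>') + '</td></tr></table>';
    $('#viewer').html(str);
  });
});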

    Read the article

  • How do I filter out NaN FLOAT values in Teradata SQL?

    - by Paul Hooper
    With the Teradata database, it is possible to load values of NaN, -Inf, and +Inf into FLOAT columns through Java. Unfortunately, once those values get into the tables, they make life difficult when writing SQL that needs to filter them out. There is no IsNaN() function, nor can you "CAST ('NaN' as FLOAT)" and use an equality comparison. What I would like to do is, SELECT SUM(VAL**2) FROM DTM WHERE NOT ABS(VAL) > 1e+21 AND NOT VAL = CAST ('NaN' AS FLOAT) but that fails with error 2620, "The format or data contains a bad character.", specifically on the CAST. I've tried simply "... AND NOT VAL = 'NaN'", which also fails for a similar reason (3535, "A character string failed conversion to a numeric value."). I cannot seem to figure out how to represent NaN within the SQL statement. Even if I could represent NaN successfully in an SQL statement, I would be concerned that the comparison would fail. According to the IEEE 754 spec, NaN = NaN should evaluate to false. What I really seem to need is an IsNaN() function. Yet that function does not seem to exist.
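One hedged workaround, based on IEEE 754 semantics rather than any documented Teradata feature: NaN fails every ordered comparison, including self-equality, so rewriting the predicate positively (<= instead of NOT ... >) should exclude NaN and both infinities in one pass, assuming Teradata's FLOAT comparisons follow IEEE behavior for these Java-loaded values (worth verifying on your release):
SELECT SUM(VAL**2)
FROM DTM
WHERE ABS(VAL) <= 1e+21  /* NaN fails every ordered comparison, so it drops out too */
  AND VAL = VAL          /* belt and braces: NaN <> NaN under IEEE 754 */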

    Read the article

  • Read array dump output and generate the corresponding XML file

    - by Christian
Hi, the text below is the dump of a multidimensional array, produced by PHP's var_dump() function. I need a Java function that reads a file with content like this (attached) and returns it as XML. For reference, at http://pear.php.net/package/Var_Dump/ you can find the (PHP) code that generates dumps in XML, so all the needed logic is there (I think). I will be waiting for your feedback. Regards, Christian array(1) { ["Processo"]= array(60) { ["Sistema"]= string(6) "E-PROC" ["UF"]= string(2) "RS" ["DataConsulta"]= string(19) "11/05/2010 17:59:17" ["Processo"]= string(20) "50000135320104047100" ["NumRegistJudici"]= string(20) "50000135320104047100" ["IdProcesso"]= string(30) "711262958983115560390000000001" ["SeqProcesso"]= string(1) "1" ["Autuado"]= string(19) "08/01/2010 12:04:47" ["StatusProcesso"]= string(1) "M" ["ComSituacaoProcesso"]= string(2) "00" ["Situacao"]= string(9) "MOVIMENTO" ["IdClasseJudicial"]= string(10) "0000000112" ["DesClasse"]= string(18) "INQUÉRITO POLICIAL" ["CodClasse"]= string(6) "000120" ["SigClasse"]= string(3) "INQ" ["DesTipoInquerito"]= string(0) "" ["CodCompetencia"]= string(2) "21" ["IdLocalidadeJudicial"]= string(4) "7150" ["ClasseSigAutor"]= string(5) "AUTOR" ["ClasseDesAutor"]= string(5) "AUTOR" ["ClasseSigReu"]= string(7) "INDICDO" ["ClasseDesReu"]= string(9) "INDICIADO" ["ClasseCodReu"]= string(2) "64" ["TipoAcao"]= string(8) "Criminal" ["TipoProcessoJudicial"]= string(1) "2" ["CodAssuntoPrincipal"]= string(6) "051801" ["CodLocalidadeJudicial"]= string(2) "00" ["IdAssuntoPrincipal"]= string(4) "1504" ["IdLocalizadorOrgaoPrincipal"]= string(30) "711264420823128430420000000001" ["ChaveConsulta"]= string(12) "513009403710" ["NumAdministrativo"]= NULL ["Magistrado"]= string(28) "RICARDO HUMBERTO SILVA BORNE" ["IdOrgaoJuizo"]= string(9) "710000085" ["IdOrgaoJuizoOriginario"]= string(9) "710000085" ["DesOrgaoJuizo"]= string(45) "JUÍZO FED. DA 02A VF CRIMINAL DE PORTO ALEGRE" ["SigOrgaoJuizo"]= string(10) "RSPOACR02F" ["CodOrgaoJuizo"]= string(9) "RS0000085" ["IdOrgaoSecretaria"]= string(9) "710000084" ["DesOrgaoSecretaria"]= string(31) "02a VF CRIMINAL DE PORTO ALEGRE" ["SigOrgaoSecretaria"]= string(9) "RSPOACR02" ["CodOrgaoSecretaria"]= string(9) "RS0000084" ["IdSigilo"]= string(1) "0" ["IdUsuario"]= string(30) "711262951173995330420000000001" ["DesSigilo"]= string(10) "Sem Sigilo" ["Localizador"]= string(25) "EM TRÂMITE ENTRE PF E MPF" ["TotalCda"]= int(0) ["DesIpl"]= string(8) "012/2010" ["Assunto"]= array(1) { [0]= array(4) { ["IdAssuntoJudicial"]= string(4) "1504" ["SeqAssunto"]= string(1) "1" ["CodAssunto"]= string(6) "051801" ["DesAssunto"]= string(84) "Moeda Falsa / Assimilados (arts.
289 e parágrafos e 290), Crimes contra a Fé Pública" } } ["ParteAutor"]= array(1) { [0]= array(12) { ["IdPessoa"]= string(30) "771230778800100040000000000508" ["TipoPessoa"]= string(3) "ENT" ["Nome"]= string(15) "POLÍCIA FEDERAL" ["Identificacao"]= string(14) "79621439000191" ["SinPartePrincipal"]= string(1) "S" ["IdProcessoParte"]= string(30) "711262958983115560390000000002" ["IdProcessoParteAtributo"]= NULL ["IdRepresentacao"]= NULL ["TipoRepresentacao"]= NULL ["AtributosProcessoParte"]= NULL ["Relacao"]= NULL ["Procurador"]= array(6) { [0]= array(4) { ["Nome"]= string(25) "SOLON RAMOS CARDOSO FILHO" ["Sigla"]= string(13) "cor-sr-dpf-rs" ["IdUsuarioProcurador"]= string(30) "711262893271855450420000000001" ["TipoUsuario"]= string(3) "CPF" } [1]= array(4) { ["Nome"]= string(18) "LUCIANA IOP CECHIN" ["Sigla"]= string(11) "luciana.lic" ["IdUsuarioProcurador"]= string(30) "711262946806708880420000000001" ["TipoUsuario"]= string(3) "CPF" } [2]= array(4) { ["Nome"]= string(31) "ALEXANDRE DA SILVEIRA ISBARROLA" ["Sigla"]= string(15) "drcor-sr-dpf-rs" ["IdUsuarioProcurador"]= string(30) "711262949451860560420000000001" ["TipoUsuario"]= string(3) "CPF" } [3]= array(4) { ["Nome"]= string(24) "JUCÉLIA TERESINHA PISONI" ["Sigla"]= string(11) "jucelia.jtp" ["IdUsuarioProcurador"]= string(30) "711262950492275450420000000001" ["TipoUsuario"]= string(3) "CPF" } [4]= array(4) { ["Nome"]= string(32) "MARCOS ANTONIO SIQUEIRA PICININI" ["Sigla"]= string(13) "picinini.masp" ["IdUsuarioProcurador"]= string(30) "711262951173995330420000000001" ["TipoUsuario"]= string(3) "APF" } [5]= array(4) { ["Nome"]= string(20) "PRISCILLA BURLACENKO" ["Sigla"]= string(12) "priscilla.pb" ["IdUsuarioProcurador"]= string(30) "711262955631630740420000000001" ["TipoUsuario"]= string(3) "DPF" } } } } ["ParteReu"]= array(1) { [0]= array(11) { ["IdPessoa"]= string(30) "711262958983115560390000000001" ["TipoPessoa"]= string(2) "PF" ["Nome"]= string(8) "A APURAR" ["Identificacao"]= NULL ["SinPartePrincipal"]= string(1) "S" ["IdProcessoParte"]= string(30) "711262958983115560390000000001" ["IdProcessoParteAtributo"]= NULL ["IdRepresentacao"]= NULL ["TipoRepresentacao"]= NULL ["AtributosProcessoParte"]= NULL ["Relacao"]= NULL } } ["OutraParte"]= array(1) { [0]= array(10) { ["Nome"]= string(26) "MINISTÉRIO PÚBLICO FEDERAL" ["CodTipoParte"]= string(3) "114" ["DesTipoParte"]= string(3) "MPF" ["SinPolo"]= string(1) "N" ["Identificacao"]= string(13) "3636198000192" ["SinPartePrincipal"]= string(1) "N" ["IdProcessoParte"]= string(30) "711262958983115560390000000003" ["IdPessoa"]= string(30) "771230778800100040000000000217" ["TipoPessoa"]= string(3) "ENT" ["Procurador"]= array(1) { [0]= array(4) { ["Nome"]= string(25) "MARIA VALESCA DE MESQUITA" ["IdUsuarioProcurador"]= string(30) "711265220162198740420000000001" ["TipoUsuario"]= string(1) "P" ["Sigla"]= string(5) "pr528" } } } } ["DadoComplementar"]= array(6) { [0]= array(5) { ["DesDadoComplem"]= string(21) "Antecipação de Tutela" ["ValorDadoComplem"]= string(13) "Não Requerida" ["IdDadoComplementar"]= string(1) "1" ["NumIdProcessoDadoComplem"]= string(30) "711262958983115560390000000003" ["IdDadoComplementarValor"]= string(1) "4" } [1]= array(5) { ["DesDadoComplem"]= string(16) "Justiça Gratuita" ["ValorDadoComplem"]= string(13) "Não Requerida" ["IdDadoComplementar"]= string(1) "4" ["NumIdProcessoDadoComplem"]= string(30) "711262958983115560390000000001" ["IdDadoComplementarValor"]= string(1) "3" } [2]= array(5) { ["DesDadoComplem"]= string(15) "Petição Urgente" ["ValorDadoComplem"]= 
string(3) "Não" ["IdDadoComplementar"]= string(1) "5" ["NumIdProcessoDadoComplem"]= string(30) "711262958983115560390000000004" ["IdDadoComplementarValor"]= string(1) "2" } [3]= array(5) { ["DesDadoComplem"]= string(22) "Prioridade Atendimento" ["ValorDadoComplem"]= string(3) "Não" ["IdDadoComplementar"]= string(1) "2" ["NumIdProcessoDadoComplem"]= string(30) "711262958983115560390000000006" ["IdDadoComplementarValor"]= string(1) "2" } [4]= array(5) { ["DesDadoComplem"]= string(9) "Réu Preso" ["ValorDadoComplem"]= string(3) "Não" ["IdDadoComplementar"]= string(1) "6" ["NumIdProcessoDadoComplem"]= string(30) "711262958983115560390000000002" ["IdDadoComplementarValor"]= string(1) "2" } [5]= array(5) { ["DesDadoComplem"]= string(24) "Vista Ministério Público" ["ValorDadoComplem"]= string(3) "Sim" ["IdDadoComplementar"]= string(1) "3" ["NumIdProcessoDadoComplem"]= string(30) "711262958983115560390000000005" ["IdDadoComplementarValor"]= string(1) "1" } } ["SemPrazoAbrir"]= bool(true) ["Evento"]= array(8) { [0]= array(18) { ["IdProcessoEvento"]= string(30) "711269271039215440420000000001" ["IdEvento"]= string(3) "228" ["IdTipoPeticaoJudicial"]= string(3) "166" ["SeqEvento"]= string(1) "8" ["DataHora"]= string(19) "22/03/2010 12:19:16" ["SinExibeDesEvento"]= string(1) "S" ["SinUsuarioInterno"]= string(1) "N" ["IdGrupoEvento"]= string(1) "4" ["SinVisualizaDocumentoExterno"]= string(1) "N" ["Complemento"]= string(7) "90 DIAS" ["DesEventoSemComplemento"]= string(27) "PETIÇÃO PROTOCOLADA JUNTADA" ["CodEvento"]= string(10) "0000000852" ["DesEvento"]= string(37) "PETIÇÃO PROTOCOLADA JUNTADA - 90 DIAS" ["DesAlternativaEvento"]= NULL ["Usuario"]= string(7) "ap18785" ["idUsuario"]= string(30) "711263330517182580420000000001" ["DesPeticao"]= string(25) "DILAÇÃO DE PRAZO DEFERIDA" ["DescricaoCompleta"]= string(75) "PETIÇÃO PROTOCOLADA JUNTADA - 90 DIAS - DILAÇÃO DE PRAZO DEFERIDA - 90 DIAS" } [1]= array(18) { ["IdProcessoEvento"]= string(30) "711269032501923580420000000001" ["IdEvento"]= string(3) "228" ["IdTipoPeticaoJudicial"]= string(3) "166" ["SeqEvento"]= string(1) "7" ["DataHora"]= string(19) "19/03/2010 18:04:59" ["SinExibeDesEvento"]= string(1) "S" ["SinUsuarioInterno"]= string(1) "N" ["IdGrupoEvento"]= string(1) "4" ["SinVisualizaDocumentoExterno"]= string(1) "N" ["Complemento"]= string(7) "90 DIAS" ["DesEventoSemComplemento"]= string(27) "PETIÇÃO PROTOCOLADA JUNTADA" ["CodEvento"]= string(10) "0000000852" ["DesEvento"]= string(37) "PETIÇÃO PROTOCOLADA JUNTADA - 90 DIAS" ["DesAlternativaEvento"]= NULL ["Usuario"]= string(5) "pr700" ["idUsuario"]= string(30) "711262976146980920420000000002" ["DesPeticao"]= string(25) "DILAÇÃO DE PRAZO DEFERIDA" ["DescricaoCompleta"]= string(75) "PETIÇÃO PROTOCOLADA JUNTADA - 90 DIAS - DILAÇÃO DE PRAZO DEFERIDA - 90 DIAS" } [2]= array(19) { ["IdProcessoEvento"]= string(30) "711268077089625240420000000001" ["IdEvento"]= string(3) "228" ["IdTipoPeticaoJudicial"]= string(3) "165" ["SeqEvento"]= string(1) "6" ["DataHora"]= string(19) "08/03/2010 16:55:48" ["SinExibeDesEvento"]= string(1) "N" ["SinUsuarioInterno"]= string(1) "N" ["IdGrupoEvento"]= string(1) "4" ["SinVisualizaDocumentoExterno"]= string(1) "N" ["Complemento"]= NULL ["DesEventoSemComplemento"]= string(27) "PETIÇÃO PROTOCOLADA JUNTADA" ["CodEvento"]= string(10) "0000000852" ["DesEvento"]= string(27) "PETIÇÃO PROTOCOLADA JUNTADA" ["DesAlternativaEvento"]= NULL ["Usuario"]= string(13) "picinini.masp" ["idUsuario"]= string(30) "711262951173995330420000000001" ["Documento"]= array(2) { [0]= array(6) { 
["IdUsuario"]= string(30) "711262951173995330420000000001" ["IdDocumento"]= string(30) "711268077089625240420000000001" ["SeqDocumento"]= string(1) "1" ["SigTipoDocumento"]= string(4) "CERT" ["IdSigilo"]= string(1) "0" ["DesSigilo"]= string(10) "Sem Sigilo" } [1]= array(6) { ["IdUsuario"]= string(30) "711262951173995330420000000001" ["IdDocumento"]= string(30) "711268077089625240420000000002" ["SeqDocumento"]= string(1) "2" ["SigTipoDocumento"]= string(4) "DESP" ["IdSigilo"]= string(1) "0" ["DesSigilo"]= string(10) "Sem Sigilo" } } ["DesPeticao"]= string(26) "PEDIDO DE DILAÇÃO DE PRAZO" ["DescricaoCompleta"]= string(26) "PEDIDO DE DILAÇÃO DE PRAZO" } [3]= array(19) { ["IdProcessoEvento"]= string(30) "711267732906972600420000000001" ["IdEvento"]= string(3) "228" ["IdTipoPeticaoJudicial"]= string(2) "52" ["SeqEvento"]= string(1) "5" ["DataHora"]= string(19) "04/03/2010 17:20:29" ["SinExibeDesEvento"]= string(1) "N" ["SinUsuarioInterno"]= string(1) "N" ["IdGrupoEvento"]= string(1) "4" ["SinVisualizaDocumentoExterno"]= string(1) "N" ["Complemento"]= NULL ["DesEventoSemComplemento"]= string(27) "PETIÇÃO PROTOCOLADA JUNTADA" ["CodEvento"]= string(10) "0000000852" ["DesEvento"]= string(27) "PETIÇÃO PROTOCOLADA JUNTADA" ["DesAlternativaEvento"]= NULL ["Usuario"]= string(5) "pr700" ["idUsuario"]= string(30) "711262976146980920420000000002" ["Documento"]= array(1) { [0]= array(6) { ["IdUsuario"]= string(30) "711262976146980920420000000002" ["IdDocumento"]= string(30) "711267732906972600420000000001" ["SeqDocumento"]= string(1) "1" ["SigTipoDocumento"]= string(3) "PET" ["IdSigilo"]= string(1) "0" ["DesSigilo"]= string(10) "Sem Sigilo" } } ["DesPeticao"]= string(7) "PETIÇÃO" ["DescricaoCompleta"]= string(7) "PETIÇÃO" } [4]= array(19) { ["IdProcessoEvento"]= string(30) "711265889365256290420000000001" ["IdEvento"]= string(3) "228" ["IdTipoPeticaoJudicial"]= string(3) "165" ["SeqEvento"]= string(1) "4" ["DataHora"]= string(19) "11/02/2010 09:59:04" ["SinExibeDesEvento"]= string(1) "N" ["SinUsuarioInterno"]= string(1) "N" ["IdGrupoEvento"]= string(1) "4" ["SinVisualizaDocumentoExterno"]= string(1) "N" ["Complemento"]= NULL ["DesEventoSemComplemento"]= string(27) "PETIÇÃO PROTOCOLADA JUNTADA" ["CodEvento"]= string(10) "0000000852" ["DesEvento"]= string(27) "PETIÇÃO PROTOCOLADA JUNTADA" ["DesAlternativaEvento"]= NULL ["Usuario"]= string(13) "picinini.masp" ["idUsuario"]= string(30) "711262951173995330420000000001" ["Documento"]= array(2) { [0]= array(6) { ["IdUsuario"]= string(30) "711262951173995330420000000001" ["IdDocumento"]= string(30) "711265222866995860420000000001" ["SeqDocumento"]= string(1) "1" ["SigTipoDocumento"]= string(4) "PORT" ["IdSigilo"]= string(1) "0" ["DesSigilo"]= string(10) "Sem Sigilo" } [1]= array(6) { ["IdUsuario"]= string(30) "711262951173995330420000000001" ["IdDocumento"]= string(30) "711265222866995860420000000002" ["SeqDocumento"]= string(1) "2" ["SigTipoDocumento"]= string(3) "OUT" ["IdSigilo"]= string(1) "0" ["DesSigilo"]= string(10) "Sem Sigilo" } } ["DesPeticao"]= string(26) "PEDIDO DE DILAÇÃO DE PRAZO" ["DescricaoCompleta"]= string(26) "PEDIDO DE DILAÇÃO DE PRAZO" } [5]= array(19) { ["IdProcessoEvento"]= string(30) "711263991150788270420000000001" ["IdEvento"]= string(3) "228" ["IdTipoPeticaoJudicial"]= string(2) "52" ["SeqEvento"]= string(1) "3" ["DataHora"]= string(19) "20/01/2010 10:50:05" ["SinExibeDesEvento"]= string(1) "N" ["SinUsuarioInterno"]= string(1) "N" ["IdGrupoEvento"]= string(1) "4" ["SinVisualizaDocumentoExterno"]= string(1) "N" ["Complemento"]= NULL 
["DesEventoSemComplemento"]= string(27) "PETIÇÃO PROTOCOLADA JUNTADA" ["CodEvento"]= string(10) "0000000852" ["DesEvento"]= string(27) "PETIÇÃO PROTOCOLADA JUNTADA" ["DesAlternativaEvento"]= NULL ["Usuario"]= string(13) "picinini.masp" ["idUsuario"]= string(30) "711262951173995330420000000001" ["Documento"]= array(4) { [0]= array(6) { ["IdUsuario"]= string(30) "711262951173995330420000000001" ["IdDocumento"]= string(30) "711263991150788270420000000001" ["SeqDocumento"]= string(1) "1" ["SigTipoDocumento"]= string(4) "DECL" ["IdSigilo"]= string(1) "0" ["DesSigilo"]= string(10) "Sem Sigilo" } [1]= array(6) { ["IdUsuario"]= string(30) "711262951173995330420000000001" ["IdDocumento"]= string(30) "711263991150788270420000000002" ["SeqDocumento"]= string(1) "2" ["SigTipoDocumento"]= string(4) "DECL" ["IdSigilo"]= string(1) "0" ["DesSigilo"]= string(10) "Sem Sigilo" } [2]= array(6) { ["IdUsuario"]= string(30) "711262951173995330420000000001" ["IdDocumento"]= string(30) "711263991150788270420000000003" ["SeqDocumento"]= string(1) "3" ["SigTipoDocumento"]= string(4) "DECL" ["IdSigilo"]= string(1) "0" ["DesSigilo"]= string(10) "Sem Sigilo" } [3]= array(6) { ["IdUsuario"]= string(30) "711262951173995330420000000001" ["IdDocumento"]= string(30) "711263991150788270420000000004" ["SeqDocumento"]= string(1) "4" ["SigTipoDocumento"]= string(4) "DECL" ["IdSigilo"]= string(1) "0" ["DesSigilo"]= string(10) "Sem Sigilo" } } ["DesPeticao"]= string(7) "PETIÇÃO" ["DescricaoCompleta"]= string(7) "PETIÇÃO" } [6]= array(19) { ["IdProcessoEvento"]= string(30) "711263955058688620420000000001" ["IdEvento"]= string(3) "228" ["IdTipoPeticaoJudicial"]= string(2) "52" ["SeqEvento"]= string(1) "2" ["DataHora"]= string(19) "20/01/2010 00:40:39" ["SinExibeDesEvento"]= string(1) "N" ["SinUsuarioInterno"]= string(1) "N" ["IdGrupoEvento"]= string(1) "4" ["SinVisualizaDocumentoExterno"]= string(1) "N" ["Complemento"]= NULL ["DesEventoSemComplemento"]= string(27) "PETIÇÃO PROTOCOLADA JUNTADA" ["CodEvento"]= string(10) "0000000852" ["DesEvento"]= string(27) "PETIÇÃO PROTOCOLADA JUNTADA" ["DesAlternativaEvento"]= NULL ["Usuario"]= string(13) "picinini.masp" ["idUsuario"]= string(30) "711262951173995330420000000001" ["Documento"]= array(6) { [0]= array(6) { ["IdUsuario"]= string(30) "711262951173995330420000000001" ["IdDocumento"]= string(30) "711263229632249660420000000001" ["SeqDocumento"]= string(1) "1" ["SigTipoDocumento"]= string(3) "OUT" ["IdSigilo"]= string(1) "0" ["DesSigilo"]= string(10) "Sem Sigilo" } [1]= array(6) { ["IdUsuario"]= string(30) "711262951173995330420000000001" ["IdDocumento"]= string(30) "711263229632249660420000000002" ["SeqDocumento"]= string(1) "2" ["SigTipoDocumento"]= string(3) "OUT" ["IdSigilo"]= string(1) "0" ["DesSigilo"]= string(10) "Sem Sigilo" } [2]= array(6) { ["IdUsuario"]= string(30) "711262951173995330420000000001" ["IdDocumento"]= string(30) "711263229632249660420000000003" ["SeqDocumento"]= string(1) "3" ["SigTipoDocumento"]= string(3) "OUT" ["IdSigilo"]= string(1) "0" ["DesSigilo"]= string(10) "Sem Sigilo" } [3]= array(6) { ["IdUsuario"]= string(30) "711262951173995330420000000001" ["IdDocumento"]= string(30) "711263229632249660420000000004" ["SeqDocumento"]= string(1) "4" ["SigTipoDocumento"]= string(3) "OUT" ["IdSigilo"]= string(1) "0" ["DesSigilo"]= string(10) "Sem Sigilo" } [4]= array(6) { ["IdUsuario"]= string(30) "711262951173995330420000000001" ["IdDocumento"]= string(30) "711263229632249660420000000005" ["SeqDocumento"]= string(1) "5" ["SigTipoDocumento"]= string(3) "OUT" 
["IdSigilo"]= string(1) "0" ["DesSigilo"]= string(10) "Sem Sigilo" } [5]= array(6) { ["IdUsuario"]= string(30) "711262951173995330420000000001" ["IdDocumento"]= string(30) "711263229632249660420000000006" ["SeqDocumento"]= string(1) "6" ["SigTipoDocumento"]= string(3) "OUT" ["IdSigilo"]= string(1) "0" ["DesSigilo"]= string(10) "Sem Sigilo" } } ["DesPeticao"]= string(7) "PETIÇÃO" ["DescricaoCompleta"]= string(7) "PETIÇÃO" } [7]= array(19) { ["IdProcessoEvento"]= string(30) "711262958983115560390000000001" ["IdEvento"]= string(3) "430" ["IdTipoPeticaoJudicial"]= NULL ["SeqEvento"]= string(1) "1" ["DataHora"]= string(19) "08/01/2010 12:04:47" ["SinExibeDesEvento"]= NULL ["SinUsuarioInterno"]= string(1) "N" ["IdGrupoEvento"]= string(1) "0" ["SinVisualizaDocumentoExterno"]= string(1) "N" ["Complemento"]= NULL ["DesEventoSemComplemento"]= string(56) "Distribuição/Atribuição Ordinária por sorteio eletrônico" ["CodEvento"]= string(6) "030101" ["DesEvento"]= string(56) "Distribuição/Atribuição Ordinária por sorteio eletrônico" ["DesAlternativaEvento"]= NULL ["Usuario"]= string(13) "picinini.masp" ["idUsuario"]= string(30) "711262951173995330420000000001" ["Documento"]= array(4) { [0]= array(6) { ["IdUsuario"]= string(30) "711262951173995330420000000001" ["IdDocumento"]= string(30) "711262956008922510390000000001" ["SeqDocumento"]= string(1) "1" ["SigTipoDocumento"]= string(4) "PORT" ["IdSigilo"]= string(1) "0" ["DesSigilo"]= string(10) "Sem Sigilo" } [1]= array(6) { ["IdUsuario"]= string(30) "711262951173995330420000000001" ["IdDocumento"]= string(30) "711262956008922510390000000002" ["SeqDocumento"]= string(1) "2" ["SigTipoDocumento"]= string(4) "OFIC" ["IdSigilo"]= string(1) "0" ["DesSigilo"]= string(10) "Sem Sigilo" } [2]= array(6) { ["IdUsuario"]= string(30) "711262951173995330420000000001" ["IdDocumento"]= string(30) "711262956008922510390000000003" ["SeqDocumento"]= string(1) "3" ["SigTipoDocumento"]= string(3) "OUT" ["IdSigilo"]= string(1) "0" ["DesSigilo"]= string(10) "Sem Sigilo" } [3]= array(6) { ["IdUsuario"]= string(30) "711262951173995330420000000001" ["IdDocumento"]= string(30) "711262956008922510390000000004" ["SeqDocumento"]= string(1) "4" ["SigTipoDocumento"]= string(3) "OUT" ["IdSigilo"]= string(1) "0" ["DesSigilo"]= string(10) "Sem Sigilo" } } ["DesPeticao"]= string(56) "Distribuição/Atribuição Ordinária por sorteio eletrônico" ["DescricaoCompleta"]= string(56) "Distribuição/Atribuição Ordinária por sorteio eletrônico" } } ["ValCausa"]= string(4) "0.00" ["OrgaoJul"]= string(45) "JUÍZO FED. DA 02A VF CRIMINAL DE PORTO ALEGRE" ["CodOrgaoJul"]= string(9) "RS0000085" ["OrgaoColegiado"]= NULL ["CodOrgaoColegiado"]= NULL ["CodOrgaoColegiadoSecretaria"]= NULL } }

    Read the article

  • OS X, Mercurial and MediaTemple problem

    - by bschaeffer
I've installed Mercurial per MT's knowledge base file here. Working with it server side using ssh from my Mac works fine. I can initialize repositories and the like, but pulling from the server or pushing from my Mac produces an error I don't understand. Here's what I get when I call hg push from my local installation (hash marks represent my server number):
remote: Traceback (most recent call last):
remote: File "/home/#####/users/.home/data/mercurial-1.5/hg", line 27, in ?
remote: mercurial.dispatch.run()
remote: File "/nfs/c05/h01/mnt/#####/data/mercurial-1.5/mercurial/dispatch.py", line 16, in run
remote: sys.exit(dispatch(sys.argv[1:]))
remote: File "/nfs/c05/h01/mnt/#####/data/mercurial-1.5/mercurial/dispatch.py", line 21, in dispatch
remote: u = _ui.ui()
remote: File "/nfs/c05/h01/mnt/#####/data/mercurial-1.5/mercurial/ui.py", line 38, in __init__
remote: for f in util.rcpath():
remote: File "/nfs/c05/h01/mnt/#####/data/mercurial-1.5/mercurial/util.py", line 1200, in rcpath
remote: _rcpath = os_rcpath()
remote: File "/nfs/c05/h01/mnt/#####/data/mercurial-1.5/mercurial/util.py", line 1174, in os_rcpath
remote: path = system_rcpath()
remote: File "/nfs/c05/h01/mnt/#####/data/mercurial-1.5/mercurial/posix.py", line 41, in system_rcpath
remote: path.extend(rcfiles(os.path.dirname(sys.argv[0]) +
remote: File "/nfs/c05/h01/mnt/#####/data/mercurial-1.5/mercurial/posix.py", line 30, in rcfiles
remote: rcs.extend([os.path.join(rcdir, f)
remote: File "/nfs/c05/h01/mnt/#####/data/mercurial-1.5/mercurial/demandimport.py", line 75, in __getattribute__
remote: self._load()
remote: File "/nfs/c05/h01/mnt/#####/data/mercurial-1.5/mercurial/demandimport.py", line 47, in _load
remote: mod = _origimport(head, globals, locals)
remote: ImportError: No module named osutil
abort: no suitable response from remote hg!
Mercurial on my Mac is configured as follows:
[ui]
username = John Smith
editor = te -w
remotecmd = ~/data/mercurial-1.5/hg
My local single repo is configured as follows (hash marks represent my server number):
[paths]
default = ssh://mysite.com@s#####.gridserver.com/domains/mysite.com/html
Mercurial on the server is configured with just a username:
[ui]
username = John Smith
The server .bash_profile is configured as follows (per the installation guide):
# Aliases
alias ls-a='ls -a -l'
# Added this as suggested by the MediaTemple guide
export PYTHONPATH=${HOME}/lib/python:$PYTHONPATH
export PATH=${HOME}/bin:$PATH
I understand this probably isn't a MediaTemple problem, but more likely an installation problem. I would really appreciate any assistance on this problem. Thanks in advance!
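One hedged reading of the traceback: hg dies while building its rc path, before doing any real work, which is consistent with Mercurial's compiled osutil module not being importable in the environment the remote hg starts in. Non-interactive ssh commands do not source ~/.bash_profile, so the PYTHONPATH export above may simply never run for pushes and pulls. A wrapper script on the server (the name and path are assumptions, not an MT convention) that sets the environment itself sidesteps the shell-startup rules entirely:
#!/bin/sh
# ~/bin/hg-wrapper -- export what hg needs, then hand over to the real binary
export PYTHONPATH=${HOME}/lib/python:$PYTHONPATH
export PATH=${HOME}/bin:$PATH
exec ${HOME}/data/mercurial-1.5/hg "$@"
Then point the Mac-side config at it instead of the bare binary:
[ui]
remotecmd = ~/bin/hg-wrapper
(chmod +x the wrapper, and keep the .bash_profile exports for interactive sessions.)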

    Read the article

  • iPhone locationManager:didFailWithError problem when GPS disabled

    - by BenG
So, I've followed other related threads, but for some reason I'm still having this error and I'm about ready to tear my hair out. I have implemented locationManager:didFailWithError to check and see if a user selects 'Don't Allow' to use the current location.
-(void)locationManager:(CLLocationManager *)manager didFailWithError:(NSError *)error {
    NSLog(@"IN ERROR");
    if ([error code] == kCLErrorDenied){
        [manager stopUpdatingLocation];
    }
}
However, the following error always appears when the user selects 'Don't Allow'...it's strange, especially the order in which the text 'IN ERROR' appears.
ERROR,Time,293420691.000,Function,"void CLClientHandleDaemonDataRegistration(__CLClient*, const CLDaemonCommToClientRegistration*, const __CFDictionary*)",server did not accept client registration 1
2010-04-19 21:44:51.000 testApp[1414:207] IN ERROR
So, it's outputting this error even before it has a chance to get into the didFailWithError function. Does anyone have any ideas of what might be happening? The rest of the locationManager code is as follows:
self.locationManager = [[[CLLocationManager alloc] init] autorelease];
locationManager.delegate = self;
locationManager.desiredAccuracy = kCLLocationAccuracyKilometer;
locationManager.distanceFilter = 2;
[locationManager startUpdatingLocation];
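For what it's worth, a hedged observation: the CLClientHandleDaemonDataRegistration line looks like Core Location's own daemon chatter, logged before the delegate callback fires, so 'IN ERROR' printing after it is the expected ordering rather than a fault in the code above. A possible refinement is to avoid prompting at all when location services are globally disabled; the exact API form depends on the SDK (on 3.x-era SDKs this was an instance property, later a class method), so confirm against your base SDK:
self.locationManager = [[[CLLocationManager alloc] init] autorelease];
locationManager.delegate = self;
if (locationManager.locationServicesEnabled) {  // +[CLLocationManager locationServicesEnabled] on newer SDKs
    locationManager.desiredAccuracy = kCLLocationAccuracyKilometer;
    locationManager.distanceFilter = 2;
    [locationManager startUpdatingLocation];
} else {
    // services are off system-wide: no prompt will appear, so take the
    // no-location code path directly (handler name is hypothetical)
    [self handleLocationUnavailable];
}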

    Read the article

  • AppFabric Cache - An existing connection was forcibly closed by the remote host

    - by Wallace Breza
I'm trying to get AppFabric cache up and running on my local development environment. I have Windows Server AppFabric Beta 2 Refresh installed, and the cache cluster and host configured and started running on Windows 7 64-bit. I'm running my MVC2 website in a local IIS website under a v4.0 app pool in integrated mode.
HostName : CachePort    Service Name               Service Status    Version Info
--------------------    ------------               --------------    ------------
SN-3TQHQL1:22233        AppFabricCachingService    UP                1 [1,1][1,1]
I have my web.config configured with the following:
<configSections>
  <section name="dataCacheClient" type="Microsoft.ApplicationServer.Caching.DataCacheClientSection, Microsoft.ApplicationServer.Caching.Core, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" allowLocation="true" allowDefinition="Everywhere"/>
</configSections>
<dataCacheClient>
  <hosts>
    <host name="SN-3TQHQL1" cachePort="22233" />
  </hosts>
</dataCacheClient>
I'm getting an error when I attempt to initialize the DataCacheFactory:
protected CacheService()
{
    _cacheFactory = new DataCacheFactory(); <-- Error here
    _defaultCache = _cacheFactory.GetDefaultCache();
}
I'm getting the ASP.NET yellow error screen with the following:
An existing connection was forcibly closed by the remote host
Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
Exception Details: System.Net.Sockets.SocketException: An existing connection was forcibly closed by the remote host
Source Error:
Line 21: protected CacheService()
Line 22: {
Line 23:     _cacheFactory = new DataCacheFactory();
Line 24:     _defaultCache = _cacheFactory.GetDefaultCache();
Line 25: }
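A hedged suggestion, since this exception has several causes: on a fresh cluster the calling Windows identity often simply isn't on the cache's allowed-client list, and an IIS app pool runs under its own virtual account. From the Caching Administration Windows PowerShell console (run as administrator; the account name below assumes the default pool name and is a guess):
Grant-CacheAllowedClientAccount "IIS APPPOOL\ASP.NET v4.0"
Get-CacheAllowedClientAccounts   # confirm the grant took
Get-CacheHost                    # the service should still report UP
If the identity was the problem, recycling the app pool and re-running the DataCacheFactory initialization should then succeed.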

    Read the article

  • Error when installing plugin in Eclipse

    - by Derk
When I try to install a plugin in Eclipse I get these error messages:
Registry event dispatcher
Error notifying registry change listener.
Error notifying registry change listener. Invalid registry object
Error notifying registry change listener. Invalid registry object
Error notifying registry change listener. Invalid registry object
Error notifying registry change listener. Invalid registry object
Error notifying registry change listener. Invalid registry object
Does someone have an idea what the cause of this problem could be? Thanks
Edit: I see the Eclipse .log file also has a lot of new stack traces. The first one is:
java.vendor=Sun Microsystems Inc.
BootLoader constants: OS=win32, ARCH=x86_64, WS=win32, NL=nl_NL
Framework arguments: -product org.eclipse.epp.package.jee.product
Command-line arguments: -os win32 -ws win32 -arch x86_64 -product org.eclipse.epp.package.jee.product
!ENTRY org.eclipse.equinox.registry 4 2 2010-05-06 21:04:31.236
!MESSAGE Problems occurred when invoking code from plug-in: "org.eclipse.equinox.registry".
!STACK 0
org.eclipse.core.runtime.InvalidRegistryObjectException: Invalid registry object
at org.eclipse.core.internal.registry.TemporaryObjectManager.getObject(TemporaryObjectManager.java:98)
at org.eclipse.core.internal.registry.BaseExtensionPointHandle.getExtensionPoint(BaseExtensionPointHandle.java:106)
at org.eclipse.core.internal.registry.BaseExtensionPointHandle.getContributor(BaseExtensionPointHandle.java:45)
at org.eclipse.core.internal.registry.BaseExtensionPointHandle.getNamespace(BaseExtensionPointHandle.java:37)
at org.eclipse.ui.internal.PopupMenuExtender.registryChanged(PopupMenuExtender.java:520)
at org.eclipse.core.internal.registry.ExtensionRegistry$2.run(ExtensionRegistry.java:921)
at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42)
at org.eclipse.core.internal.registry.ExtensionRegistry.processChangeEvent(ExtensionRegistry.java:919)
at org.eclipse.core.runtime.spi.RegistryStrategy.processChangeEvent(RegistryStrategy.java:260)
at org.eclipse.core.internal.registry.osgi.ExtensionEventDispatcherJob.run(ExtensionEventDispatcherJob.java:50)
at org.eclipse.core.internal.jobs.Worker.run(Worker.java:54)
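A hedged first step for walls of "Invalid registry object" errors like this: they usually point at a stale OSGi extension-registry cache rather than at the plugin being installed, and Eclipse can be told to rebuild that cache on the next launch:
eclipse.exe -clean
To make it persistent, add -clean on its own line at the top of eclipse.ini (above -vmargs). If the errors survive a clean start, trying a fresh workspace, or a fresh Eclipse install plus re-adding the plugin, helps isolate whether the installation profile itself is damaged.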

    Read the article
