Search Results

Search found 59975 results on 2399 pages for 'data comparison'.

  • SQL Server - Rebuilding Indexes

    - by Renso
    Goal: Rebuild indexes in SQL Server. This can be done one index at a time, or with the example script below, which rebuilds all indexes for a specified table or for all tables in a given database.

    Why? The data in indexes gets fragmented over time: as an index grows, newly added rows are physically stored in other sections of the allocated database storage space. It is like loading your Christmas shopping into the trunk of your car; once the trunk is full you continue to load some on the back seat. In the same way, a storage buffer is created for your index, but once that runs out, the data is stored in other storage space and the index is no longer held in contiguous physical pages. To access the index, the database manager has to "string together" the disparate fragments into one contiguous set of pages for that index. Defragmentation fixes that.

    What does the fragmentation affect? Depending on how large the table is and how fragmented the data is, fragmentation can cause SQL Server to perform unnecessary data reads, slowing down SQL Server's performance.

    Which index to rebuild? As a rule, when you rebuild a table's clustered index, all other non-clustered indexes on that same table are automatically rebuilt. A table can have only one clustered index.

    How to rebuild all the indexes for one table: the DBCC DBREINDEX command rebuilds all of the indexes on a given table when you omit the index name, as the script below does.

    How to rebuild all indexes for all tables in a given database:

        USE [myDB]   -- enter your database name here
        DECLARE @tableName varchar(255)
        DECLARE TableCursor CURSOR FOR
            SELECT table_name FROM information_schema.tables
            WHERE table_type = 'base table'
        OPEN TableCursor
        FETCH NEXT FROM TableCursor INTO @tableName
        WHILE @@FETCH_STATUS = 0
        BEGIN
            DBCC DBREINDEX(@tableName, ' ', 90)   -- a fill factor of 90%
            FETCH NEXT FROM TableCursor INTO @tableName
        END
        CLOSE TableCursor
        DEALLOCATE TableCursor

    What does this script do? It reindexes all indexes in all tables of the given database, filling each index page to a fill factor of 90%. While DBCC DBREINDEX runs and rebuilds the indexes, the table is temporarily unavailable for use by your users until the rebuild has completed, so don't do this during production hours: it creates a shared lock on the table, although it will allow read-only uncommitted data reads, i.e. SELECT.

    What is the fill factor? It is the percentage of space on each index page used for storing data when the index is created or rebuilt. It replaces the fill factor from when the index was created, becoming the new default for the index and for any other nonclustered indexes rebuilt because a clustered index is rebuilt. When fillfactor is 0, DBCC DBREINDEX uses the fill factor value last specified for the index; this value is stored in the sys.indexes catalog view. If fillfactor is specified, table_name and index_name must be specified as well. If fillfactor is not specified, the default fill factor, 100, is used.

    How do I determine the level of fragmentation? Run the DBCC SHOWCONTIG command. However, this requires you to specify the IDs of both the table and the index being examined. To make it a lot easier, requiring only the table name and/or index name, you can run this script:

        DECLARE @ID int, @IndexID int, @IndexName varchar(128)
        -- Specify the table and index names
        SELECT @IndexName = 'index_name'    -- name of the index
        SET @ID = OBJECT_ID('table_name')   -- name of the table
        SELECT @IndexID = IndID
        FROM sysindexes
        WHERE id = @ID AND name = @IndexName
        -- Show the level of fragmentation
        DBCC SHOWCONTIG (@ID, @IndexID)

    Here is an example:

        DBCC SHOWCONTIG scanning 'Tickets' table...
        Table: 'Tickets' (1829581556); index ID: 1, database ID: 13
        TABLE level scan performed.
        - Pages Scanned................................: 915
        - Extents Scanned..............................: 119
        - Extent Switches..............................: 281
        - Avg. Pages per Extent........................: 7.7
        - Scan Density [Best Count:Actual Count].......: 40.78% [115:282]
        - Logical Scan Fragmentation ..................: 16.28%
        - Extent Scan Fragmentation ...................: 99.16%
        - Avg. Bytes Free per Page.....................: 2457.0
        - Avg. Page Density (full).....................: 69.64%
        DBCC execution completed. If DBCC printed error messages, contact your system administrator.

    What's important here? The Scan Density: ideally it should be 100%. As time goes by it drops as fragmentation occurs. When the level drops below 75%, you should consider re-indexing. Here are the results for the same table and clustered index after running the script:

        DBCC SHOWCONTIG scanning 'Tickets' table...
        Table: 'Tickets' (1829581556); index ID: 1, database ID: 13
        TABLE level scan performed.
        - Pages Scanned................................: 692
        - Extents Scanned..............................: 87
        - Extent Switches..............................: 86
        - Avg. Pages per Extent........................: 8.0
        - Scan Density [Best Count:Actual Count].......: 100.00% [87:87]
        - Logical Scan Fragmentation ..................: 0.00%
        - Extent Scan Fragmentation ...................: 22.99%
        - Avg. Bytes Free per Page.....................: 639.8
        - Avg. Page Density (full).....................: 92.10%
        DBCC execution completed. If DBCC printed error messages, contact your system administrator.

    What's different? The Scan Density has increased from 40.78% to 100%; there is no fragmentation left on the clustered index. Note that since we rebuilt the clustered index, all the other indexes were also rebuilt.

  • XmlDocument.InnerXml is null, but InnerText is not

    - by Adam Neal
    I'm using XmlDocument and XmlElement to build a simple (but large) XML document that looks something like:

        <Widgets>
          <Widget>
            <Stuff>foo</Stuff>
            <MoreStuff>bar</MoreStuff>
            ...lots more child nodes
          </Widget>
          <Widget>
          ...lots more Widget nodes
        </Widgets>

    My problem is that when I'm done building the XML, XmlDocument.InnerXml is null, but InnerText still shows all the text of all the child nodes. Has anyone ever seen a problem like this before? What kind of input data would cause these symptoms? I expected the XmlDocument to just throw an exception if it was given bad data. Note: I'm pretty sure this is related to the input data, as I can only reproduce it against certain data sets. I also tried escaping the data with SecurityElement.Escape, but it made no difference.
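
    One diagnostic sketch, assuming the culprit is characters that are illegal in XML 1.0 (e.g. stray control characters): XmlDocument accepts them silently when the nodes are built, but an XmlWriter with CheckCharacters enabled will throw and name the offending character when the document is serialized:

        using System;
        using System.Xml;

        class Diagnose
        {
            static void Main()
            {
                XmlDocument doc = new XmlDocument();
                doc.LoadXml("<Widgets><Widget><Stuff>foo</Stuff></Widget></Widgets>");
                // Simulate suspect input: a vertical tab is illegal in XML 1.0
                doc.DocumentElement.FirstChild.FirstChild.InnerText += "\v";

                var settings = new XmlWriterSettings { CheckCharacters = true };
                try
                {
                    using (XmlWriter writer = XmlWriter.Create(Console.Out, settings))
                    {
                        doc.Save(writer);
                    }
                    Console.WriteLine("document serialized cleanly");
                }
                catch (ArgumentException ex)
                {
                    // Invalid characters surface here instead of silently
                    // breaking InnerXml later.
                    Console.WriteLine("bad character in input data: " + ex.Message);
                }
            }
        }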

  • Type patterns and generic classes in Haskell

    - by finnsson
    I'm trying to understand type patterns and generic classes in Haskell but can't seem to get it. Could someone explain it in layman's terms? In [1] I've read that "To apply functions generically to all data types, we view data types in a uniform manner: except for basic predefined types such as Float, IO, and (->), every Haskell data type can be viewed as a labeled sum of possibly labeled products," and then Unit, :*: and :+: are mentioned. Are all data types in Haskell automatically versions of the above, and if so, how do I figure out how a specific data type is represented in terms of :*:, etc.? The user's guide for generic classes (ch. 7.16) at haskell.org doesn't mention the predefined types, but shouldn't they be handled in every function if the type patterns are supposed to be exhaustive?

    [1] Comparing Approaches to Generic Programming in Haskell, Ralf Hinze, Johan Jeuring, and Andres Löh
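
    For what it's worth, the {| ... |} type-pattern syntax of GHC's old generic classes extension has since been removed, but modern GHC exposes the same sum-of-products view through GHC.Generics, where U1, :+: and :*: play the roles of Unit and the labeled sums and products from the paper. A minimal sketch (not the paper's own code) of a generic function written once against that uniform representation:

        {-# LANGUAGE DeriveGeneric, TypeOperators, FlexibleContexts #-}
        import GHC.Generics

        -- Count the fields stored in a value, defined by cases over the
        -- uniform representation instead of per data type.
        class GCount f where
          gcount :: f p -> Int

        instance GCount U1 where                                  -- Unit
          gcount _ = 0

        instance GCount (K1 i c) where                            -- one field
          gcount _ = 1

        instance (GCount f, GCount g) => GCount (f :*: g) where   -- product
          gcount (x :*: y) = gcount x + gcount y

        instance (GCount f, GCount g) => GCount (f :+: g) where   -- labeled sum
          gcount (L1 x) = gcount x
          gcount (R1 y) = gcount y

        instance GCount f => GCount (M1 i c f) where              -- meta-data wrapper
          gcount (M1 x) = gcount x

        count :: (Generic a, GCount (Rep a)) => a -> Int
        count = gcount . from

        data Shape = Circle Double | Rect Double Double deriving Generic

        main :: IO ()
        main = print (count (Rect 2 3))   -- prints 2

    Looking at the Rep instance GHC derives for a type (e.g. with :kind! Rep Shape in GHCi) is one way to see how a specific data type decomposes into sums and products.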

  • How do you print a limited number of characters?

    - by Mike Pateras
    Sorry to put a post up about something so simple, but I don't see what I'm doing wrong here.

        char data[1024];
        DWORD numRead;
        ReadFile(handle, data, 1024, &numRead, NULL);
        if (numRead > 0)
            printf(data, "%.5s");

    My intention with the above is to read data from a file, and then only print out 5 characters. However, it prints out all 1024 characters, which is contrary to what I'm reading here. The goal, of course, is to do something like:

        printf(data, "%.*s", numRead);

    What am I doing wrong here?
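
    For the record, printf's first argument is the format string and the values follow it, so the calls above have the arguments reversed. A minimal corrected sketch:

        #include <stdio.h>

        int main(void) {
            char data[] = "Hello, world";
            int numRead = 5;

            printf("%.5s\n", data);           /* prints "Hello": at most 5 chars  */
            printf("%.*s\n", numRead, data);  /* precision taken from an int argument */
            return 0;
        }

    Note that %.*s consumes an int for the precision, so a DWORD would need a cast to int.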

  • JQuery.ajax success function returns empty

    - by viatropos
    I have a very basic AJAX function in jQuery:

        $.ajax({
            url: "http://www.google.com",
            dataType: "html",
            success: function(data) {
                alert(data);
            }
        });

    But the data is always an empty string, no matter what url I go to... Why is that? I am running this locally at http://localhost:3000, and am using jQuery 1.4.2. If I make a local request, however, like this:

        $.ajax({
            url: "http://localhost:3000/test",
            dataType: "html",
            success: function(data) {
                alert(data);
            }
        });

    ...it returns the html page at that address. What am I missing here?
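
    A likely culprit is the browser's same-origin policy: a page served from localhost:3000 cannot read responses from another origin such as google.com, so the cross-domain request yields an empty string while the local one works. For third-party endpoints that support it, JSONP is one workaround; a sketch with a hypothetical endpoint:

        $.ajax({
            url: "http://example.com/api?callback=?", // hypothetical JSONP-enabled endpoint
            dataType: "jsonp",                        // script transport sidesteps same-origin
            success: function (data) {
                alert(data);
            }
        });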

  • Perl script to print out cars model and car color

    - by Gary Liggons
    I am trying to create a Perl script to print out car models and colors, and the data is below. I want to know if there is any way to make the car model heading a field so that I can print it any time I want to. The data below is a CSV file; the way I want the data to look on a report is below as well.

    This is how the data looks:

        Chevy
        blue,1978,Washington
        brown,1989,Dallas
        black,2001,Queens
        white,2003,Manhattan
        Toyota
        red,2003,Bronx
        green,2004,Queens
        brown,2002,Brooklyn
        black,1999,Harlem

    This is how I am trying to get the data to look in a report:

        Car Model: Toyota
        Color: Red
        Year: 2002
        City: Queens
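
    One way to make the model heading reusable is to treat bare lines as the current model and CSV lines as its records. A minimal sketch, assuming the data lives in cars.csv and follows the heading/field layout shown above:

        #!/usr/bin/perl
        use strict;
        use warnings;

        my %cars;     # model => list of [color, year, city] records
        my $model;

        open my $fh, '<', 'cars.csv' or die "cars.csv: $!";
        while (my $line = <$fh>) {
            chomp $line;
            next unless length $line;
            if ($line !~ /,/) {                 # bare line: a model heading
                $model = $line;
            } else {                            # CSV line: a record for the model
                my ($color, $year, $city) = split /,/, $line;
                push @{ $cars{$model} }, [ $color, $year, $city ];
            }
        }
        close $fh;

        # Print any model on demand:
        for my $rec (@{ $cars{'Toyota'} || [] }) {
            my ($color, $year, $city) = @$rec;
            print "Car Model: Toyota Color: $color Year: $year City: $city\n";
        }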

  • FileReference and HttpService Browse Image Modify it then Upload it

    - by user177787
    Hello, I am trying to build an image uploader where the user can:

    - browse local files with button.browse
    - select one and save it as a FileReference
    - call FileReference.load() and bind the data to our image control
    - apply a rotation to it, changing the image data
    - and finally upload it to a server

    To change the image data I take the matrix of the displayed image, transform it, and bind the new matrix back to my old image:

        private function TurnImage():void
        {
            // Turn it
            var m:Matrix = _img.transform.matrix;
            rotateImage(m);
            _img.transform.matrix = m;
        }

    Now the matter is that I really don't know how to send the data as a file to my server, because the rotated result is not stored in the FileReference, and the data inside FileReference is read-only, so I can't change it or create a new one, which rules out .upload(). I then tried HttpService.send, but I can't figure out how to send a file that way.
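
    Since FileReference.data is read-only, the usual route is to re-encode the rotated bitmap yourself and POST the raw bytes. A sketch, assuming the as3corelib JPGEncoder (a separate library, not built into the player), with a placeholder upload URL and sizing/clipping details glossed over:

        // Draw the rotated image into a fresh BitmapData, then encode it.
        var rotated:BitmapData = new BitmapData(_img.width, _img.height);
        rotated.draw(_img, m);                        // m = the rotation matrix

        var bytes:ByteArray = new JPGEncoder(85).encode(rotated);

        var request:URLRequest = new URLRequest("http://example.com/upload.php");
        request.method = URLRequestMethod.POST;
        request.contentType = "application/octet-stream";
        request.data = bytes;
        new URLLoader().load(request);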

  • Parsing dbpedia JSON in Python

    - by givp
    Hello, I'm trying to get my head around the dbpedia JSON schema and can't figure out an efficient way of extracting a specific node. This is what dbpedia gives me: http://dbpedia.org/data/Ceramic_art.json

    I've got the whole thing as a JSON object in Python but don't really understand how to get the english abstract from this data. I've gotten this far:

        u = "http://dbpedia.org/data/Ceramic_art.json"
        data = urlfetch.fetch(url=u)
        json_data = json.loads(data.content)
        for j in json_data["http://dbpedia.org/resource/Ceramic_art"]:
            if j == "http://dbpedia.org/ontology/abstract":
                print "it's here"

    Not sure how to proceed from here. As you can see, there are multiple languages. I need to get the english abstract. Thanks for your help, g
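
    Each abstract entry in dbpedia's RDF/JSON output is, as far as I can tell, a dict carrying "lang" and "value" keys, so filtering on lang == "en" should pull out the English text. A minimal sketch continuing from json_data above:

        abstracts = json_data["http://dbpedia.org/resource/Ceramic_art"] \
                             ["http://dbpedia.org/ontology/abstract"]

        english = [entry["value"] for entry in abstracts
                   if entry.get("lang") == "en"]

        if english:
            print english[0]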

  • Error opening streamreader because file is used by another process

    - by Geri Langlois
    I am developing an application that reads an Excel spreadsheet, validates the data, and then maps it to a SQL table. The process is to read the file via a StreamReader, validate the data, manually make corrections to the Excel spreadsheet, validate again, and repeat until all data validates. If the Excel spreadsheet is open, then when I attempt to read the data via a StreamReader I get the error "The process cannot access the file ... because it is being used by another process." Is there a way to remove the lock, or otherwise read the data into a StreamReader, without having to open and close Excel each time?
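
    If the goal is simply to read while Excel still holds the file open, opening the underlying FileStream with a permissive share mode usually works. A minimal sketch (the path is a placeholder):

        using System.IO;

        // FileShare.ReadWrite tells the OS we don't mind another process
        // (Excel) keeping the file open for writing while we read it.
        using (var stream = new FileStream(@"C:\data\import.csv",
                                           FileMode.Open,
                                           FileAccess.Read,
                                           FileShare.ReadWrite))
        using (var reader = new StreamReader(stream))
        {
            string contents = reader.ReadToEnd();
            // validate contents...
        }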

  • [PHP] Condition for array with string as keys

    - by Kel
    My PL/SQL procedure returns a cursor, and it always returns data. I fetch it with oci_fetch_assoc and save the row in an array. If results were found, the keys of the array will be strings. If the cursor didn't find data, it returns the value 0, so the key of the array will be numeric.

        while ($data = oci_fetch_assoc($cursor)) {
            if (!isset($data[0])) {
                ...
            }
            ...
        }

    What's the best way to check that the array is not just 0, but contains data? Thanks
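
    One sketch that leans directly on the difference described above: a real result row has string keys, while the sentinel row is numerically indexed (key() exposes the first key of a freshly fetched row):

        while (($data = oci_fetch_assoc($cursor)) !== false) {
            if (is_string(key($data))) {
                // real result row: process it
            } else {
                // numeric key: the cursor's "no data" sentinel (0)
            }
        }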

  • Why Is my json-object from AJAX not understood by javascript, even with 'json' dataType?

    - by pete
    My js code simply gets a JSON object from my server, but I think it should be automatically parsed and turned into an object with properties, yet it's not allowing access properly.

        $.ajax({
            type: 'POST',
            url: '/misc/json-sample.js',
            data: {href: path}, // the POST data passed in to the Drupal menu call to get the menu
            dataType: 'json',
            success: function (datax) {
                if (datax.debug) {
                    alert('Debug data: ' + datax.debug);
                } else {
                    alert('No debug data: ' + datax.toSource());
                }
            }
        });

    The /misc/json-sample.js file is:

        [ { "path": "examplemodule/parent1/child1/grandchild1", "title": "First grandchild option", "children": false } ]

    I have also been trying to return that object from Drupal as follows, with the same results:

        $items[] = array(
            'path' => 'examplemodule/parent1/child1/grandchild1',
            'title' => t('First grandchild option'),
            'debug' => t('debug me!'),
            'children' => FALSE,
        );
        print drupal_to_js($items);

    What happens (in FF, which has the toSource() capability) is the alert with:

        No debug data: [ { "path": "examplemodule/parent1/child1/grandchild1", "title": "First grandchild option", "children": false } ]

    Thanks
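
    Note that the response shown is a one-element array, so the properties live on datax[0] rather than on datax itself. A minimal sketch of the success handler:

        success: function (datax) {
            var item = datax[0];   // the JSON array wraps the object
            if (item && item.debug) {
                alert('Debug data: ' + item.debug);
            } else {
                alert('Path: ' + item.path + ', title: ' + item.title);
            }
        }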

  • SQL SERVER – Backing Up and Recovering the Tail End of a Transaction Log – Notes from the Field #042

    - by Pinal Dave
    [Notes from Pinal]: The biggest challenge people face is not taking a backup; the biggest challenge is restoring a backup successfully. I have seen so many examples where users failed to restore their database because they made some mistake when they took the backup and were not aware of it. The tail-log backup was such an issue in earlier versions of SQL Server, but in the latest version the Microsoft team has cleared up the confusion with additional information on the backup and restore screens themselves. Now that the additional information is there, a few more people are confused because they have no clue about it: previously they did not see this as an issue, and now they are encountering the tail log as new learning. Linchpin People are database coaches and wellness experts for a data driven world. In this 42nd episode of the Notes from the Field series, database expert Tim Radney (partner at Linchpin People) explains, in very simple words, backing up and recovering the tail end of a transaction log.

    Many times when restoring a database over an existing database, SQL Server will warn you about needing to take a tail-end-of-the-log backup. This might be your reminder that you have to choose to overwrite the database, or it could be your reminder that you are about to write over and lose any transactions since the last transaction log backup.

    You might be asking yourself, "What is the tail end of the transaction log?" The tail end of the transaction log is simply any committed transactions that have occurred since the last transaction log backup. This is a very crucial part of a recovery strategy if you are lucky enough to be able to capture this part of the log.

    Most organizations have chosen to accept some amount of data loss. You might be shaking your head at this statement; however, if your organization is taking transaction log backups every 15 minutes, then your potential risk of data loss is up to 15 minutes. Depending on the extent of the issue causing you to have to perform a restore, you may or may not have access to the transaction log (LDF) to be able to back up those vital transactions. For example, if the storage array or disk that holds your transaction log file becomes corrupt or damaged, then you wouldn't be able to recover the tail end of the log. If you do have access to the physical log file, then you can still back up the tail end of the log.

    In 2013 I presented a session at the PASS Summit called "The Ultimate Tail Log Backup and Restore" and have been invited back this year to present it again. During this session I demonstrate how you can back up the tail end of the log even after the data file becomes corrupt. In my demonstration I set my database offline and then delete the data file (MDF). The database can't become more corrupt than that. I attempt to bring the database back online to change its state to RECOVERY PENDING and then back up the tail end of the log. I can do this by specifying WITH NO_TRUNCATE. Using NO_TRUNCATE is equivalent to specifying both COPY_ONLY and CONTINUE_AFTER_ERROR; as its name says, it does not try to truncate the log.

    This is a great demo, but how could I back up the tail end of the log if the failure destroyed my entire instance of SQL Server and all I had was the LDF file? During my demonstration I also show that I can attach the log file to a database on another instance and then back up the tail end of the log. If I am performing proper backups, then my most recent full, differential, and log files should be on a server other than the one that crashed. I achieve this by creating a new database with the same name as the failed database. I then set the database offline, delete my data file, and overwrite the log with my good log file. I attempt to bring the database back online and then back up the log with NO_TRUNCATE, just as in the first example.

    I encourage each of you to view my blog post and watch the video demonstration on how to perform these tasks. I really hope that none of you ever have to perform this in production; however, it is a really good idea to know how to do it just in case. It really isn't a matter of "IF" you will have to perform a restore of a production system, but more of a "WHEN". Being able to recover the tail end of the log in these severe cases could be the difference between having to notify all your business customers of data loss or not. If you want me to take a look at your server and its settings, or if your server is facing any issue, we can Fix Your SQL Server.

    Note: Tim has also written an excellent book on SQL Backup and Recovery, a must have for everyone. Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
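
    For reference, a minimal sketch of the tail-log backup Tim describes (database name and path are placeholders):

        -- NO_TRUNCATE = COPY_ONLY + CONTINUE_AFTER_ERROR: back up the log
        -- even though the data file is gone and the database won't come online.
        BACKUP LOG [FailedDB]
        TO DISK = N'\\backupserver\sqlbackups\FailedDB_tail.trn'
        WITH NO_TRUNCATE;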

  • Is it dangerous to store user-enterable text into a hidden form via javascript?

    - by KallDrexx
    In my asp.net MVC application I am using in-place editors to allow users to edit fields without a standard form view. Unfortunately, since I am using LINQ to SQL combined with my data mapping layer, I cannot just update one field at a time and instead need to send all fields over at once. So the solution I came up with was to store all my model fields in hidden fields, and provide span tags that contain the visible data (these span tags become editable thanks to my jQuery plugin). When a user saves an edit to a field, jQuery takes the value, places it in the hidden form, and sends the whole form to the server to commit via AJAX. When the data goes into the hidden field originally (on page load) and into the span tags, it is properly encoded, but when the user changes the data in the contenteditable span field, I just run:

        $("#hiddenfield").val($("#spanfield").html());

    Am I opening any holes with this method? Obviously the server also properly encodes everything prior to database entry.
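
    Whatever the server does, it has to treat the posted value as untrusted input anyway. One small hardening step on the client, assuming the span should only ever hold plain text, is to read .text() rather than .html():

        // .text() returns only the user-visible characters, dropping any
        // markup a user may have managed to inject into the editable span.
        $("#hiddenfield").val($("#spanfield").text());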

  • What is the cost in bytes for the overhead of a sql_variant column in SQL Server?

    - by Elan
    I have a table which contains many columns of float data type with 15-digit precision; each column consumes 8 bytes of storage. Most of the time the data does not require this amount of precision and could be stored as a real data type. In many cases the value can be 0, in which case I could get away with storing a single byte. My goal here is to optimize storage space requirements, which is an issue I am facing working with the SQL Express 4GB database size limit. If byte, real, and float data types are stored in a sql_variant column, there is obviously some overhead involved in storing these values. What is the cost of this overhead? I would then need to evaluate whether I would actually end up with significant space savings (or not) by switching to sql_variant column data types. Thanks, Elan

  • Get all Select Elements in a Form by referencing $(this) instead of $("form select")

    - by DaveDev
    Hi guys, I'm currently getting all the select elements that exist in a form with the following:

        $("form").submit(function(event) {
            // gather data
            var data = GetSelectData($("form select"));
            // do submit
            $.post($(this).attr("action"), data /*, ...etc */);
        });

    Instead of passing in $("form select"), is there a way I can say something like:

        $(this).children('select')   // this doesn't work, btw

    ...to get all the select elements that exist within the context of the form whose submit event is executing? This would allow me to move all the functionality into a common function and reduce my code to the following:

        $("form").submit(function(event) {
            GatherDataAndSubmit($(this));
        });

        function GatherDataAndSubmit(obj) {
            var data = GetSelectData(obj.children('select'));
            $.post(obj.attr("action"), data /*, ...etc */);
        }

    Thanks, Dave
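
    A sketch of the common-function version using .find(), which searches all descendants; .children() only looks one level down the DOM, so it misses selects nested inside divs or fieldsets:

        $("form").submit(function (event) {
            GatherDataAndSubmit($(this));
        });

        function GatherDataAndSubmit(obj) {
            var data = GetSelectData(obj.find("select"));  // all descendant selects
            $.post(obj.attr("action"), data /*, ...etc */);
        }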

  • Is it OK to pass SQLCommand as a parameter?

    - by TooFat
    I have a business layer that passes a connection string and a SqlCommand to a data layer like so:

        public void PopulateLocalData()
        {
            System.Data.SqlClient.SqlCommand cmd = new System.Data.SqlClient.SqlCommand();
            cmd.CommandType = System.Data.CommandType.StoredProcedure;
            cmd.CommandText = "usp_PopulateServiceSurveyLocal";
            DataLayer.DataProvider.ExecSQL(ConnString, cmd);
        }

    The data layer then just executes the SQL like so:

        public static int ExecSQL(string sqlConnString, System.Data.SqlClient.SqlCommand cmd)
        {
            int rowsAffected;
            using (SqlConnection conn = new SqlConnection(sqlConnString))
            {
                conn.Open();
                cmd.Connection = conn;
                rowsAffected = cmd.ExecuteNonQuery();
                cmd.Dispose();
            }
            return rowsAffected;
        }

    Is it OK for me to pass the SqlCommand as a parameter like this, or is there a better, more accepted way of doing it? One of my concerns is that if an error occurs while executing the query, the cmd.Dispose() line will never execute. Does that mean it will continue to use up memory that will never be released?
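
    To address the disposal concern directly: wrapping the command in a using block (equivalent to try/finally) guarantees Dispose runs even when ExecuteNonQuery throws. A minimal sketch of the same method:

        using System.Data.SqlClient;

        public static int ExecSQL(string sqlConnString, SqlCommand cmd)
        {
            using (SqlConnection conn = new SqlConnection(sqlConnString))
            using (cmd)   // disposed even if ExecuteNonQuery throws
            {
                conn.Open();
                cmd.Connection = conn;
                return cmd.ExecuteNonQuery();
            }
        }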

  • SQLite doesn't have booleans or date-times.

    - by DanM
    I've been thinking about using SQLite for my next project, but I'm concerned that it seems to lack proper datetime and bit data types. If I use DbLinq (or some other ORM) to generate C# classes, will the data types of the properties be "dumbed down"? Will date-time data be placed in properties of type string or double? Will boolean data be placed in properties of type int? If yes, what are the implications? I'm envisioning a scenario where I need to write a whole second layer of classes with more specific data types and do a bunch of transformations and casts, but maybe it's not as bad as I fear. If you have any experience with this or a similar scenario, how did you handle it?
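
    For context, the usual SQLite conventions, which most ORMs map onto, are INTEGER 0/1 for booleans and ISO-8601 TEXT (or Julian-day REAL) for date-times; the built-in date functions understand the TEXT form. A small sketch:

        -- Booleans as INTEGER 0/1, date-times as ISO-8601 TEXT.
        CREATE TABLE task (
            id     INTEGER PRIMARY KEY,
            done   INTEGER NOT NULL DEFAULT 0,   -- boolean
            due_at TEXT                          -- e.g. '2010-05-14T09:30:00'
        );

        SELECT * FROM task
        WHERE done = 0
          AND datetime(due_at) < datetime('now');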

  • Django json serialization problem

    - by codingJoe
    I am having difficulty serializing a Django object. The problem is that there are foreign keys: I want the serialization to include data from the referenced object, not just its index. For example, I would like the sponsor field to say "sponsor.last_name, sponsor.first_name" rather than "13". How can I fix my serialization?

    JSON data:

        {"totalCount": "2", "activities": [{"pk": 1, "model": "app.activity",
          "fields": {"activity_date": "2010-12-20", "description": "my activity",
          "sponsor": 13, "location": 1, ....

    Model code:

        class Activity(models.Model):
            activity_date = models.DateField()
            description = models.CharField(max_length=200)
            sponsor = models.ForeignKey(Sponsor)
            location = models.ForeignKey(Location)

        class Sponsor(models.Model):
            last_name = models.CharField(max_length=20)
            first_name = models.CharField(max_length=20)
            specialty = models.CharField(max_length=100)

        class Location(models.Model):
            location_num = models.IntegerField(primary_key=True)
            location_name = models.CharField(max_length=100)

        def activityJSON(request):
            activities = Activity.objects.all()
            total = activities.count()
            activities_json = serializers.serialize("json", activities)
            data = "{\"totalCount\":\"%s\",\"activities\":%s}" % (total, activities_json)
            return HttpResponse(data, mimetype="application/json")
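
    One common workaround is to bypass the serializers framework and build the dicts by hand, pulling the related rows in with select_related. A minimal sketch (the field formatting is illustrative):

        import json

        from django.http import HttpResponse

        def activityJSON(request):
            # select_related fetches sponsor and location in the same query
            activities = Activity.objects.select_related('sponsor', 'location')
            payload = {
                "totalCount": activities.count(),
                "activities": [
                    {
                        "activity_date": a.activity_date.isoformat(),
                        "description": a.description,
                        "sponsor": "%s, %s" % (a.sponsor.last_name,
                                               a.sponsor.first_name),
                        "location": a.location.location_name,
                    }
                    for a in activities
                ],
            }
            return HttpResponse(json.dumps(payload), mimetype="application/json")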

  • LINQ - Select Statement - The null value cannot be assigned to a member with type System.Int32 which is a non-nullable value type

    - by thiag0
    I am trying to achieve the following:

        _4S.NJB_Request request = (from r in db.NJB_Requests
                                   where r.RequestId == referenceId
                                   select r).Take(1).SingleOrDefault();

    ...and getting the following exception:

        Message: The null value cannot be assigned to a member with type System.Int32 which is a non-nullable value type.
        StackTrace:
           at System.Data.Linq.SqlClient.SqlProvider.Execute(Expression query, QueryInfo queryInfo, IObjectReaderFactory factory, Object[] parentArgs, Object[] userArgs, ICompiledSubQuery[] subQueries, Object lastResult)
           at System.Data.Linq.SqlClient.SqlProvider.ExecuteAll(Expression query, QueryInfo[] queryInfos, IObjectReaderFactory factory, Object[] userArguments, ICompiledSubQuery[] subQueries)
           at System.Data.Linq.SqlClient.SqlProvider.System.Data.Linq.Provider.IProvider.Execute(Expression query)
           at System.Data.Linq.DataQuery`1.System.Linq.IQueryProvider.Execute[S](Expression expression)
           at System.Linq.Queryable.SingleOrDefault[TSource](IQueryable`1 source)
           at DAL.SqlDataProvider.MarkNJBPCRequestAsComplete(Int32 referenceId, Int32 processState)

    I have verified that 'referenceId' does have a value. Does anyone know why this would happen in a select statement? Thanks!
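
    This error typically means some column in NJB_Requests allows NULL while the corresponding generated property is a plain int, so materializing the row fails. Making the mapped property nullable is the usual fix; a hypothetical column mapping for illustration (the column name is an assumption):

        using System.Data.Linq.Mapping;

        [Table(Name = "NJB_Requests")]
        public partial class NJB_Request
        {
            // The database column allows NULL, so the CLR property
            // must be int? rather than plain int.
            [Column(Name = "ProcessState", CanBeNull = true)]
            public int? ProcessState { get; set; }
        }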

  • Using Interfaces in an action signature of ASP.NET MVC controller

    - by Dmitry Borovsky
    Hello, I want to use an interface in an action signature, so I've tried making my own model binder by deriving from DefaultModelBinder:

        public class InterfaceBinder<T> : DefaultModelBinder where T : new()
        {
            protected override object CreateModel(ControllerContext controllerContext,
                ModelBindingContext bindingContext, Type modelType)
            {
                return base.CreateModel(controllerContext, bindingContext, typeof(T));
            }
        }

        public interface IFoo
        {
            string Data { get; set; }
        }

        public class Foo : IFoo /* other interfaces */
        {
            /* a lot of other methods and properties */
            public Bar Data { get; set; }

            string IFoo.Data
            {
                get { return Data.ToString(); }
                set { Data = new Bar(value); }
            }
        }

        public class MegaController : Controller
        {
            public ActionResult Process(
                [ModelBinder(typeof(InterfaceBinder<Foo>))] IFoo foo)
            { /* bla-bla-bla */ }
        }

    But it doesn't work. Does anybody have an idea how to achieve this behaviour? And yes, I know that I can make my own implementation of IModelBinder, but I'm looking for an easier way.

  • Context is Everything

    - by Angus Graham
    How many times have you asked a question only to hear an answer like "Well, it depends. What exactly are you trying to do?" There are times when raw information can't tell us what we need to know without putting it in a larger context.

    Let's take a real world example. If I'm a maintenance planner trying to figure out which assets should be replaced during my next maintenance window, I'm going to go to my Asset Management System. I can get it to spit out a list of assets that have failed several times over the last year. But what are these assets connected to? Are there any safety consequences to shutting off this pipeline to do the work? Is some other work that's planned going to conflict with replacing this asset? Several of these questions can't be answered by simply spitting out a list of asset IDs. The maintenance planner will have to reference a diagram of the plant to answer them.

    This is precisely the idea behind Augmented Business Visualization. An Augmented Business Visualization (ABV) solution is one where your structured data (enterprise application data) and your unstructured data (documents, contracts, floor plans, designs, etc.) come together to allow you to make better decisions. Essentially we're putting your business data into its context.

    AutoVue allows you to create ABV solutions by integrating your enterprise application with AutoVue's hotspot framework. Hotspots can be defined for your document. Users can click these hotspots to trigger actions in your enterprise app. Similarly, the enterprise app can highlight the hotspots in your document based on its business data, creating a visual dashboard of your business data in the context of your document.

    ABV is not new. We introduced the hotspot framework in AutoVue 20.1 with text hotspots: any text in a PDF or 2D CAD drawing could be turned into a hotspot. In 20.2 we have enhanced this with two new types of hotspots: 3D and regional hotspots. 3D hotspots allow you to turn 3D parts into hotspots. Hotspots can be defined based on the attributes of the part, so you can create hotspots based on part numbers, material, date of delivery, etc. Regional hotspots allow an administrator to define rectangular regions on any PDF, image, or 2D CAD drawing. This is perfect for cases where the document you're using either doesn't have text in it (a JPG or TIFF, for example) or where you want to define hotspots that don't correspond to the text in the document. There are lots of possible uses for AutoVue hotspots.
    A great demonstration of how our hotspot capabilities can help add context to enterprise data in the Energy sector can be found in the following AutoVue movies:

    - Maintenance Planning in the Energy Sector
    - Capital Construction Project Management in the Energy Sector
    - Commissioning and Handover Process for the Energy Sector

  • MS Access Mark Duplicates in order of appearance - using the function RankOfDup: (SELECT Count(*) ...)

    - by veska stoyanova
    I'm trying to create a ranking that shows the sequence of agreements for the two fields customer and agreement. The number for agreements must be unique, whereas customers can repeat. The formula

        RankOfDup: (SELECT Count(*) FROM Data a
                    WHERE a.customer = Data.customer
                      AND a.agreement >= Data.agreement)

    works beautifully, but after this query (with columns Agreement, Customer and RankOfDup) I need to create a crosstab that transposes the RankOfDup. It works when I materialize the table first and then create the query, but my data is too large, so I'm trying to use the select query with the ranking directly in a crosstab query. However, when I try to do this, Access gives an error message that the Microsoft Jet database engine doesn't recognise Data.customer. Any ideas how I can fix this?
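
    Access crosstab queries often reject correlated subqueries like the one above; rewriting the ranking with the DCount domain aggregate is a common workaround. A sketch, assuming customer is a text field and agreement is numeric:

        RankOfDup: DCount("*", "Data",
            "customer='" & [customer] & "' AND agreement>=" & [agreement])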

  • Python or matplotlib limitation error

    - by Werner
    Hi, I wrote an algorithm using Python and matplotlib that generates histograms from some text input data. When the number of input data points is approximately greater than 15000, I get an error in the append line of my code:

        mydata = []
        for i in range(len(data)):
            mydata.append(string.atof(data[i]))

    The error:

        Traceback (most recent call last):
          File "get_histogram_picture.py", line 25, in <module>
            mydata.append(string.atof(data[i]))
          File "/usr/lib/python2.6/string.py", line 388, in atof
            return _float(s)
        ValueError: invalid literal for float(): -a

    Can it be an error in Python? What is the solution? Thanks
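
    The traceback points at the real culprit: one of the input tokens is the literal string -a, which cannot be parsed as a number, so this is bad input rather than a Python or matplotlib limit. A defensive sketch that reports and skips non-numeric tokens (string.atof is equivalent to the float built-in):

        mydata = []
        for token in data:
            try:
                mydata.append(float(token))
            except ValueError:
                print "skipping non-numeric token: %r" % token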

  • Copying a 14bit grayscale image (saved in long[]) to a pictureBox

    - by Itsik
    My camera gives me 14-bit grayscale images, but the API's function returns a long* to the image data (so I'm assuming 4 bytes for each pixel). My application is written in C++/CLI, and the pictureBox is of the .NET type. I am currently using the BitmapData.LockBits() mechanism to gain pointer access to the image data, and using

        memcpy(bmpData.Scan0.ToPointer(), imageData, sizeof(long) * height * width)

    to copy the image data to the Bitmap. For now, the only PixelFormat that works is 32-bit RGB, and the image appears in shades of blue with contours. Trying to initialize the Bitmap as 16bppGrayScale isn't working. I would ideally like to cast the array from long to word and use a 16-bit format (hoping the 14-bit data will be displayed properly), but I'm not sure if this works. Also, I don't want to iterate over the image data, so finding the min/max and then histogram-stretching to [0..255] isn't an option for me (the display must be as efficient as possible). Thanks
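
    For what it's worth, GDI+ cannot render Format16bppGrayScale bitmaps, which is likely why the 16-bit attempt fails; the usual fallback is a single conversion pass into the 32-bit RGB buffer, scaling each 14-bit sample down to 8 bits. A sketch reusing the question's variables, assuming the stride is exactly width * 4 (no row padding):

        // Shift each 14-bit sample down to 8 bits and replicate it into
        // the B, G and R channels (A = opaque) of the locked 32bpp buffer.
        unsigned char* dst = static_cast<unsigned char*>(bmpData.Scan0.ToPointer());
        for (int i = 0; i < width * height; ++i) {
            unsigned char v = static_cast<unsigned char>(imageData[i] >> 6);
            dst[i * 4 + 0] = v;    // B
            dst[i * 4 + 1] = v;    // G
            dst[i * 4 + 2] = v;    // R
            dst[i * 4 + 3] = 255;  // A
        }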

  • Chart Filtering

    - by Tim Dexter
    Interesting question from a colleague this week: can you add a filter to a chart to just show a specific set of data? In an RTF template, you need to do a little finagling in the chart definition. In an online template, a couple of clicks and you're done.

    RTF

    Build your chart as you would normally, to include all the data to start with. Now flip to the Advanced tab to see the code behind the chart. It's not very pretty, but with a little effort you can get it looking a little more friendly. Here's my chart showing employees and their salaries:

        <Graph depthAngle="50" depthRadius="8" seriesEffect="SE_AUTO_GRADIENT">
          <LegendArea visible="true"/>
          <Title text="Executive Department Only" visible="true" horizontalAlignment="CENTER"/>
          <LocalGridData colCount="{count(.//G_2)}" rowCount="1">
            <RowLabels>
              <Label>SALARY</Label>
            </RowLabels>
            <ColLabels>
              <xsl:for-each select=".//G_2">
                <Label><xsl:value-of select="EMP_NAME"/></Label>
              </xsl:for-each>
            </ColLabels>
            <DataValues>
              <RowData>
                <xsl:for-each select=".//G_2">
                  <Cell><xsl:value-of select="SALARY"/></Cell>
                </xsl:for-each>
              </RowData>
            </DataValues>
          </LocalGridData>
        </Graph>

    Note the .//G_2 selections: the chart is currently grabbing all values at the G_2 level of the data. We can use an XPath expression to filter the data down to the set we want to see. In my case I want to see only the employees that are in the Executive department. My data is structured thus:

        <DATA_DS>
          <G_1>
            <DEPARTMENT_NAME>Accounting</DEPARTMENT_NAME>
            <G_2>
              <MANAGER>Higgins</MANAGER>
              <EMPLOYEE_ID>206</EMPLOYEE_ID>
              <HIRE_DATE>2002-06-07T00:00:00.000-04:00</HIRE_DATE>
              <SALARY>8300</SALARY>
              <JOB_TITLE>Public Accountant</JOB_TITLE>
              <PARAS>11000</PARAS>
              <EMP_NAME>William Gietz</EMP_NAME>
            </G_2>

    So the XPath expression I'm going to use to limit the data to the Executive department is:

        .//G_2[../DEPARTMENT_NAME='Executive']

    Note the ../ moves the parser up the XML tree to be able to test the DEPARTMENT_NAME value. I added this XPath expression to the three places that need it: the colCount attribute, the ColLabels loop and the RowData loop. It's simple enough to do. Testing your XPath expression is easier using a table of data. Please note: as soon as you make changes to the chart code and go back to the Builder tab, you'll find that everything is grayed out. I recommend you make all the changes you can via the chart dialog before updating the code.

    Online Template

    Implementing the filter is much simpler here; there is a dialog box to help you out. Add your chart and fill out the various data points you want to show, then hit the Filter item in the ribbon above the chart. That will pop the filter dialog box where you can add a filter to the chart. You can add multiple filters if needed, and of course you can use the Manage Filters button to re-open and edit the filters. Pretty straightforward stuff!
