Search Results

Search found 60391 results on 2416 pages for 'data generation'.

  • Copying a 14bit grayscale image (saved in long[]) to a pictureBox

    - by Itsik
    My camera gives me 14-bit grayscale images, but the API's function returns a long* to the image data (so I'm assuming 4 bytes per pixel). My application is written in C++/CLI, and the pictureBox is of .NET type. I am currently using the BitmapData.LockBits() mechanism to gain pointer access to the image data, and using memcpy(bmpData.Scan0.ToPointer(), imageData, sizeof(long)*height*width) to copy the image data to the Bitmap. For now, the only PixelFormat that works is 32-bit RGB, and the image appears in shades of blue with contours. Trying to initialize the Bitmap as 16bppGrayscale isn't working. I would ideally like to cast the array from long to word and use a 16-bit format (hoping that the 14-bit data will be displayed properly), but I'm not sure if this works. Also, I don't want to iterate over the image data, so finding the min/max and then histogram-stretching to [0..255] isn't an option for me (the display must be as efficient as possible). Thanks
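
    A possible direction, sketched in C# (the same GDI+ calls exist in C++/CLI) and not a verified fix: GDI+ cannot actually render Format16bppGrayScale, so a cheap workaround is a fixed down-shift from 14 to 8 bits while writing into a 32bpp bitmap. This does touch each pixel once, but a constant shift avoids the min/max scan and histogram stretch. It assumes one 32-bit value per pixel, as the long* suggests, and needs unsafe code enabled; names are illustrative.

        using System.Drawing;
        using System.Drawing.Imaging;

        static class GrayscaleDisplay
        {
            // Down-shift 14-bit samples to 8 bits and expand to gray RGB.
            static unsafe Bitmap ToDisplayable(int[] raw, int width, int height)
            {
                var bmp = new Bitmap(width, height, PixelFormat.Format32bppRgb);
                var rect = new Rectangle(0, 0, width, height);
                BitmapData bd = bmp.LockBits(rect, ImageLockMode.WriteOnly, bmp.PixelFormat);
                byte* row = (byte*)bd.Scan0;
                for (int y = 0; y < height; y++, row += bd.Stride)
                {
                    uint* px = (uint*)row;
                    for (int x = 0; x < width; x++)
                    {
                        byte g = (byte)(raw[y * width + x] >> 6); // 14-bit -> 8-bit
                        px[x] = 0xFF000000u | (uint)(g << 16) | (uint)(g << 8) | g;
                    }
                }
                bmp.UnlockBits(bd);
                return bmp;
            }
        }

    A right-shift by 6 maps the full 14-bit range [0..16383] onto [0..255] without any per-image statistics.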

  • SQLite doesn't have booleans or date-times.

    - by DanM
    I've been thinking about using SQLite for my next project, but I'm concerned that it seems to lack proper datetime and bit data types. If I use DbLinq (or some other ORM) to generate C# classes, will the data types of the properties be "dumbed down"? Will date-time data be placed in properties of type string or double? Will boolean data be placed in properties of type int? If yes, what are the implications? I'm envisioning a scenario where I need to write a whole second layer of classes with more specific data types and do a bunch of transformations and casts, but maybe it's not as bad as I fear. If you have any experience with this or a similar scenario, how did you handle it?
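
    If the generated classes do come back with long/string/double properties, one pattern that avoids a whole second layer of classes (a sketch under that assumption; DbLinq's actual mapping may differ) is a thin set of typed wrapper properties beside the raw ones:

        using System;
        using System.Globalization;

        // Hypothetical entity: the *Raw properties stand in for whatever the
        // ORM generates from SQLite's loose column types; the typed ones are
        // the conversion layer, kept next to the entity instead of in a
        // parallel class hierarchy.
        public class Order
        {
            public long IsShippedRaw { get; set; }   // SQLite INTEGER 0/1
            public string PlacedAtRaw { get; set; }  // SQLite TEXT, ISO-8601

            public bool IsShipped
            {
                get { return IsShippedRaw != 0; }
                set { IsShippedRaw = value ? 1L : 0L; }
            }

            public DateTime PlacedAt
            {
                get { return DateTime.Parse(PlacedAtRaw, CultureInfo.InvariantCulture); }
                set { PlacedAtRaw = value.ToString("o", CultureInfo.InvariantCulture); }
            }
        }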

  • Returned JSON is seemingly mixed up when using jQuery Ajax

    - by Niall Paterson
    I've a PHP script that has the following line: echo json_encode(array('success'=>'true','userid'=>$userid, 'data' => $array)); It returns the following: { "success": "true", "userid": "1", "data": [ { "id": "1", "name": "Trigger", "image": "", "subtitle": "", "description": "", "range1": null, "range2": null, "range3": null }, { "id": "2", "name": "DWS", "image": "", "subtitle": "", "description": "", "range1": null, "range2": null, "range3": null } ] } But when I make a jQuery Ajax call as below: $.ajax({ type: 'POST', url: 'url', crossDomain: true, data: {name: name}, success: function(success, userid, data) { if (success = true) { document.write(userid); document.write(success); } } }); The userid is 'success'. The actual success one works, it's true. Is this malformed data being returned? Or is it simply my code? Thanks in advance, Niall

  • Why is my JSON object from AJAX not understood by JavaScript, even with 'json' dataType?

    - by pete
    My JS code simply gets a JSON object from my server, but I think it should be automatically parsed and turned into an object with properties, yet it's not allowing access properly. $.ajax({ type: 'POST', url: '/misc/json-sample.js', data: {href: path}, // THIS IS THE POST DATA THAT IS PASSED IN TO THE DRUPAL MENU CALL TO GET THE MENU... dataType: 'json', success: function (datax) { if (datax.debug) { alert('Debug data: ' + datax.debug); } else { alert('No debug data: ' + datax.toSource() ) ; } The /misc/json-sample.js file is: [ { "path": "examplemodule/parent1/child1/grandchild1", "title": "First grandchild option", "children": false } ] (I have also been trying to return that object from Drupal as follows, with the same results.) Drupal version of misc/json-sample.js: $items[] = array( 'path' => 'examplemodule/parent1/child1/grandchild1', 'title' => t('First grandchild option'), 'debug' => t('debug me!'), 'children' => FALSE ); print drupal_to_js($items); What happens (in FF, which has the toSource() capability) is the alert with 'No debug data: [ { "path": "examplemodule/parent1/child1/grandchild1", "title": "First grandchild option", "children": false } ]'. Thanks

  • How to set parameters in Python zlib module

    - by fagricipni
    I want to write a Python program that makes PNG files. My big problem is with generating the CRC and the data in the IDAT chunk. Python 2.6.4 does have a zlib module, but there are extra settings needed. The PNG specification REQUIRES the IDAT data to be compressed with zlib's deflate method with a window size of 32768 bytes, but I can't find how to set those parameters in the Python zlib module. As for the CRC for each chunk, the zlib module documentation indicates that it contains a CRC function. I believe that calling that CRC function as crc32(data,-1) will generate the CRC that I need, though if necessary I can translate the C code given in the PNG specification. Note that I can generate the rest of the PNG file and the data that is to be compressed for the IDAT chunk, I just don't know how to properly compress the image data for the IDAT chunk after implementing the initial filtering step.

  • Is it dangerous to store user-entered text in a hidden form via JavaScript?

    - by KallDrexx
    In my ASP.NET MVC application I am using in-place editors to allow users to edit fields without a standard form view. Unfortunately, since I am using LINQ to SQL combined with my data mapping layer, I cannot just update one field at a time and instead need to send all fields over at once. So the solution I came up with was to store all my model fields in hidden fields, and provide span tags that contain the visible data (these span tags become editable thanks to my jQuery plugin). When a user saves their edits to a field, jQuery takes their value, places it in the hidden form, and sends the whole form to the server to commit via Ajax. When the data goes into the hidden field originally (at page load) and into the span tags, the data is properly encoded, but upon the user changing the data in the contenteditable span field, I just run $("#hiddenfield").val($("#spanfield").html()); Am I opening any holes with this method? Obviously the server also properly encodes everything prior to database entry.
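
    An aside that bears on the question: the client-side copy step is mostly irrelevant to safety, because anything in the DOM or a hidden field can be tampered with before the POST. What matters is that the server re-validates and re-encodes. A minimal sketch of that stance in an MVC action (controller, action, and field names are hypothetical):

        using System.Web;
        using System.Web.Mvc;

        public class NotesController : Controller
        {
            [HttpPost]
            [ValidateInput(false)]            // the hidden field may carry markup
            public ActionResult Save(string noteHtml)
            {
                // Encode (or run through an HTML whitelist sanitizer) before
                // the value reaches storage or another user's page.
                string safe = HttpUtility.HtmlEncode(noteHtml);
                // ... hand `safe` to the data layer here ...
                return Json(new { ok = true });
            }
        }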

  • Ubuntu 11.10 Intel controller VGA detected but no signal

    - by Fred Zimmerman
    Ubuntu 11.10, Dell Vostro, Intel® Sandybridge Mobile x86/MMX/SSE2. The Displays dialog correctly detects and identifies a ViewSonic 27" VGA monitor, but the monitor says it's receiving no signal. Plugged into another monitor (Sony 20"), same result. $ lspci | grep VGA 00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09) I've browsed these forums and tried everything I can, but nothing has worked. I need a stepwise troubleshooting plan.

  • Parsing dbpedia JSON in Python

    - by givp
    Hello, I'm trying to get my head around the DBpedia JSON schema and can't figure out an efficient way of extracting a specific node. This is what DBpedia gives me: http://dbpedia.org/data/Ceramic_art.json I've got the whole thing as a JSON object in Python but don't really understand how to get the English abstract from this data. I've gotten this far: u = "http://dbpedia.org/data/Ceramic_art.json" data = urlfetch.fetch(url=u) json_data = json.loads(data.content) for j in json_data["http://dbpedia.org/resource/Ceramic_art"]: if(j == "http://dbpedia.org/ontology/abstract"): print "it's here" Not sure how to proceed from here. As you can see there are multiple languages; I need to get the English abstract. Thanks for your help, g

  • SQL Server - Rebuilding Indexes

    - by Renso
    Goal: rebuild indexes in SQL Server. This can be done one index at a time, or with the example script below, which rebuilds all indexes for a specified table or for all tables in a given database.

    Why? The data in indexes gets fragmented over time. That means that as the index grows, newly added rows are physically stored in other sections of the allocated database storage space. Kind of like when you load your Christmas shopping into the trunk of your car: once it is full, you continue to load some on the back seat. In the same way, some storage buffer is created for your index, but once that runs out the data is stored in other storage space, and the data in your index is no longer kept in contiguous physical pages. To access the index, the database manager has to "string together" disparate fragments to create the full index as one contiguous set of pages. Defragmentation fixes that.

    What does the fragmentation affect? Depending, of course, on how large the table is and how fragmented the data is, it can cause SQL Server to perform unnecessary data reads, slowing down SQL Server's performance.

    Which index to rebuild? As a rule, consider that when you rebuild a table's clustered index, all other non-clustered indexes on that same table will automatically be rebuilt. A table can only have one clustered index.

    How to rebuild all the indexes for one table: use the DBCC DBREINDEX command. Note that it will not automatically rebuild the indexes of every table in a database; for that, use the following script.

    How to rebuild all indexes for all tables in a given database:

        USE [myDB]   -- enter your database name here
        DECLARE @tableName varchar(255)
        DECLARE TableCursor CURSOR FOR
        SELECT table_name FROM information_schema.tables
        WHERE table_type = 'base table'
        OPEN TableCursor
        FETCH NEXT FROM TableCursor INTO @tableName
        WHILE @@FETCH_STATUS = 0
        BEGIN
            DBCC DBREINDEX(@tableName, ' ', 90)   -- a fill factor of 90%
            FETCH NEXT FROM TableCursor INTO @tableName
        END
        CLOSE TableCursor
        DEALLOCATE TableCursor

    What does this script do? It reindexes all indexes in all tables of the given database, each with a fill factor of 90%. While DBCC DBREINDEX runs and rebuilds the indexes, the table becomes temporarily unavailable to your users until the rebuild has completed, so don't do this during production hours: it takes a shared lock on the tables, although it does allow read-only uncommitted data reads, i.e. SELECT.

    What is the fill factor? It is the percentage of space on each index page used for storing data when the index is created or rebuilt. It replaces the fill factor specified when the index was created, becoming the new default for the index and for any other nonclustered indexes rebuilt because a clustered index is rebuilt. When fillfactor is 0, DBCC DBREINDEX uses the fill factor value last specified for the index; this value is stored in the sys.indexes catalog view. If fillfactor is specified, table_name and index_name must be specified. If fillfactor is not specified, the default fill factor, 100, is used.

    How do I determine the level of fragmentation? Run the DBCC SHOWCONTIG command. However, this requires you to specify the IDs of both the table and the index. To make it a lot easier, requiring only the table name and/or index name, you can run this script:

        DECLARE
            @ID int,
            @IndexID int,
            @IndexName varchar(128)
        -- Specify the table and index names
        SELECT @IndexName = 'index_name'    -- name of the index
        SET @ID = OBJECT_ID('table_name')   -- name of the table
        SELECT @IndexID = IndID
        FROM sysindexes
        WHERE id = @ID AND name = @IndexName
        -- Show the level of fragmentation
        DBCC SHOWCONTIG (@ID, @IndexID)

    Here is an example:

        DBCC SHOWCONTIG scanning 'Tickets' table...
        Table: 'Tickets' (1829581556); index ID: 1, database ID: 13
        TABLE level scan performed.
        - Pages Scanned................................: 915
        - Extents Scanned..............................: 119
        - Extent Switches..............................: 281
        - Avg. Pages per Extent........................: 7.7
        - Scan Density [Best Count:Actual Count].......: 40.78% [115:282]
        - Logical Scan Fragmentation ..................: 16.28%
        - Extent Scan Fragmentation ...................: 99.16%
        - Avg. Bytes Free per Page.....................: 2457.0
        - Avg. Page Density (full).....................: 69.64%
        DBCC execution completed. If DBCC printed error messages, contact your system administrator.

    What's important here? The Scan Density: ideally it should be 100%, and as time goes by it drops as fragmentation occurs. When the level drops below 75%, you should consider re-indexing. Here are the results for the same table and clustered index after running the rebuild script:

        DBCC SHOWCONTIG scanning 'Tickets' table...
        Table: 'Tickets' (1829581556); index ID: 1, database ID: 13
        TABLE level scan performed.
        - Pages Scanned................................: 692
        - Extents Scanned..............................: 87
        - Extent Switches..............................: 86
        - Avg. Pages per Extent........................: 8.0
        - Scan Density [Best Count:Actual Count].......: 100.00% [87:87]
        - Logical Scan Fragmentation ..................: 0.00%
        - Extent Scan Fragmentation ...................: 22.99%
        - Avg. Bytes Free per Page.....................: 639.8
        - Avg. Page Density (full).....................: 92.10%
        DBCC execution completed. If DBCC printed error messages, contact your system administrator.

    What's different? The Scan Density has increased from 40.78% to 100%: no fragmentation left on the clustered index. Note that since we rebuilt the clustered index, all other indexes were also rebuilt.

  • MySQL get row closest to NOW()

    - by Christopher McCann
    I have a table with user data such as name, address, etc. and another table which holds a paragraph of text about each user. The reason they are separate is that we need to keep all the old 'about' data: if the user changes their paragraph, the old one should still be stored. Each bit of 'about' data has a primary key, aboutMeID. What I want to do is have a join that pulls their name, address, etc. and the latest bit of 'about me' data from the other table. I am not sure, though, how I can order the join to get only the latest 'about me' data. Can someone help?

  • PHP: best practice. Do I save HTML tags in the DB or store the HTML entity value?

    - by Matt
    Hi guys, I was wondering which way I should do the following. I am using the TinyMCE WYSIWYG editor, which formats the user's input with the right HTML tags. Now, I need to save the data entered into the editor into a database table. Should I encode the HTML tags to their corresponding entities when inserting into the DB, so that when I get the data back from the table I don't have to encode it for XSS purposes, but I'd still have to use eval for the HTML tags to format the text? OR do I save the HTML tags into the database, then when I get the data back from the database encode the HTML tags to their entities, but then, as the tags will appear to the user, I'd have to use the eval function to actually format the data as it was entered? My thoughts are with the first option; I just wondered what you guys thought. Thanks M

  • Monitoring C++ applications

    - by Scott A
    We're implementing a new centralized monitoring solution (Zenoss). Incorporating servers, networking, and Java programs is straightforward with SNMP and JMX. The question, however, is: what are the best practices for monitoring and managing custom C++ applications in large, heterogeneous (Solaris x86, RHEL Linux, Windows) environments? The possibilities I see are:

    Net-SNMP. Advantages: single, central daemon on each server; well-known standard; easy integration into monitoring solutions; we run Net-SNMP daemons on our servers already. Disadvantages: complex implementation (MIBs, Net-SNMP library); new technology to introduce for the C++ developers.

    rsyslog. Advantages: single, central daemon on each server; well-known standard; unknown integration into monitoring solutions (I know they can do alerts based on text, but how well would it work for sending telemetry like memory usage, queue depths, thread capacity, etc.?); simple implementation. Disadvantages: possible integration issues; somewhat new technology for C++ developers; possible porting issues if we switch monitoring vendors; probably involves coming up with an ad-hoc communication protocol (or using RFC 5424 structured data; I don't know if Zenoss supports that without custom ZenPack coding).

    Embedded JMX (embed a JVM and use JNI). Advantages: consistent management interface for both Java and C++; well-known standard; easy integration into monitoring solutions; somewhat simple implementation (we already do this today for other purposes). Disadvantages: complexity (JNI, a thunking layer between native C++ and Java, basically writing the management code twice); possible stability problems; requires a JVM in each process, using considerably more memory; JMX is new technology for C++ developers; each process has its own JMX port (we run a lot of processes on each machine).

    Local JMX daemon, processes connect to it. Advantages: single, central daemon on each server; consistent management interface for both Java and C++; well-known standard; easy integration into monitoring solutions. Disadvantages: complexity (basically writing the management code twice); need to find or write such a daemon; need a protocol between the JMX daemon and the C++ processes; JMX is new technology for C++ developers.

    CodeMesh JunC++ion. Advantages: consistent management interface for both Java and C++; well-known standard; easy integration into monitoring solutions; single, central daemon on each server when run in shared-JVM mode; somewhat simple implementation (requires code generation). Disadvantages: complexity (code generation; requires a GUI and several rounds of tweaking to produce the proxied code); possible JNI stability problems; requires a JVM in each process, using considerably more memory (in embedded mode); does not support Solaris x86 (deal breaker); even if it did support Solaris x86, there are possible compiler compatibility issues (we use an odd combination of STLport and Forte on Solaris); each process has its own JMX port when run in embedded mode (we run a lot of processes on each machine); possibly precludes a shared JMX server for non-C++ processes (?).

    Is there some reasonably standardized, simple solution I'm missing? Given no other reasonable solutions, which of these is typically used for custom C++ programs? My gut feel is that Net-SNMP is how people do this, but I'd like others' input and experience before I make a decision.

  • MS Access Mark Duplicates in order of appearance - using the function RankOfDup: (SELECT Count(*) ...)

    - by veska stoyanova
    I'm trying to create a ranking that shows the sequence of agreements for the two fields Customer and Agreement. The number for an agreement must be unique, whereas customers can repeat. The formula RankOfDup: (SELECT Count(*) FROM Data a WHERE a.customer=Data.customer And a.agreement >= Data.agreement) works beautifully, but after this query, with columns Agreement, Customer and RankOfDup, I need to create a crosstab that transposes the RankOfDup. It works when I make the table first and then create the query, but my data is too large, so I'm trying to put the select query with the ranking directly in a crosstab query. However, when I try to do this Access gives an error message that the Microsoft Jet database engine doesn't recognize Data.customer. Any ideas how I can fix this?

  • CVE-2010-2761 Code Injection Vulnerability in Perl

    - by Umang_D
    CVE-2010-2761: Improper Control of Generation of Code ('Code Injection') vulnerability
    CVSSv2 Base Score: 4.3
    Component: Perl
    Product and Resolution:
    - Solaris 9: Contact Support
    - Solaris 10: SPARC 146032-05, x86 146033-05
    This notification describes vulnerabilities fixed in third-party components that are included in Oracle's product distributions. Information about vulnerabilities affecting Oracle products can be found on the Oracle Critical Patch Updates and Security Alerts page.

  • Is it OK to pass SQLCommand as a parameter?

    - by TooFat
    I have a business layer that passes a connection string and a SqlCommand to a data layer like so: public void PopulateLocalData() { System.Data.SqlClient.SqlCommand cmd = new System.Data.SqlClient.SqlCommand(); cmd.CommandType = System.Data.CommandType.StoredProcedure; cmd.CommandText = "usp_PopulateServiceSurveyLocal"; DataLayer.DataProvider.ExecSQL(ConnString, cmd); } The data layer then just executes the SQL like so: public static int ExecSQL(string sqlConnString, System.Data.SqlClient.SqlCommand cmd) { int rowsAffected; using (SqlConnection conn = new SqlConnection(sqlConnString)) { conn.Open(); cmd.Connection = conn; rowsAffected = cmd.ExecuteNonQuery(); cmd.Dispose(); } return rowsAffected; } Is it OK for me to pass the SqlCommand as a parameter like this, or is there a better, more accepted way of doing it? One of my concerns is that if an error occurs while executing the query, the cmd.Dispose line will never execute. Does that mean it will continue to use up memory that will never be released?
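
    A sketch of one way to address the disposal concern (keeping the caller-creates, data-layer-disposes contract of the original): wrapping both objects in using blocks guarantees Dispose runs even when ExecuteNonQuery throws.

        using System.Data.SqlClient;

        public static class DataProvider
        {
            public static int ExecSQL(string sqlConnString, SqlCommand cmd)
            {
                using (cmd)                            // disposed even if the query throws
                using (var conn = new SqlConnection(sqlConnString))
                {
                    conn.Open();
                    cmd.Connection = conn;
                    return cmd.ExecuteNonQuery();
                }
            }
        }

    On the memory worry: a missed Dispose is not a permanent managed-memory leak (finalization eventually runs), but it does hold unmanaged command and connection resources longer than necessary, so deterministic disposal is still the right habit.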

  • Django json serialization problem

    - by codingJoe
    I am having difficulty serializing a Django object. The problem is that there are foreign keys. I want the serialization to include data from the referenced object, not just the index. For example, I would like the sponsor data field to say "sponsor.last_name, sponsor.first_name" rather than "13". How can I fix my serialization? JSON data: {"totalCount":"2","activities":[{"pk": 1, "model": "app.activity", "fields": {"activity_date": "2010-12-20", "description": "my activity", "sponsor": 13, "location": 1, .... model code: class Activity(models.Model): activity_date = models.DateField() description = models.CharField(max_length=200) sponsor = models.ForeignKey(Sponsor) location = models.ForeignKey(Location) class Sponsor(models.Model): last_name = models.CharField(max_length=20) first_name= models.CharField(max_length=20) specialty = models.CharField(max_length=100) class Location(models.Model): location_num = models.IntegerField(primary_key=True) location_name = models.CharField(max_length=100) def activityJSON(request): activities = Activity.objects.all() total = activities.count() activities_json = serializers.serialize("json", activities) data = "{\"totalCount\":\"%s\",\"activities\":%s}" % (total, activities_json) return HttpResponse(data, mimetype="application/json")

  • KScope 2014 Preview: Debra Lilley - The Learning Never Stops

    - by OTN ArchBeat
    When it comes to business travel, Oracle ACE Director Debra Lilley never seems to stand still. The same can be said for her approach to sharpening her professional skills. In this interview Debra talks about the role ODTUG Kscope 2014 will play in her ongoing technical education, and about Kscope's efforts to get a new generation of IT professionals off to a great start. Connect with Debra Lilley

  • LINQ - Select Statement - The null value cannot be assigned to a member with type System.Int32 which is a non-nullable value type

    - by thiag0
    I am trying to achieve the following... _4S.NJB_Request request = (from r in db.NJB_Requests where r.RequestId == referenceId select r).Take(1).SingleOrDefault(); Getting the following exception... Message: The null value cannot be assigned to a member with type System.Int32 which is a non-nullable value type. StackTrace: at System.Data.Linq.SqlClient.SqlProvider.Execute(Expression query, QueryInfo queryInfo, IObjectReaderFactory factory, Object[] parentArgs, Object[] userArgs, ICompiledSubQuery[] subQueries, Object lastResult) at System.Data.Linq.SqlClient.SqlProvider.ExecuteAll(Expression query, QueryInfo[] queryInfos, IObjectReaderFactory factory, Object[] userArguments, ICompiledSubQuery[] subQueries) at System.Data.Linq.SqlClient.SqlProvider.System.Data.Linq.Provider.IProvider.Execute(Expression query) at System.Data.Linq.DataQuery`1.System.Linq.IQueryProvider.Execute[S](Expression expression) at System.Linq.Queryable.SingleOrDefault[TSource](IQueryable`1 source) at DAL.SqlDataProvider.MarkNJBPCRequestAsComplete(Int32 referenceId, Int32 processState) I have verified that 'referenceId' does have a value. Anyone know why this would happen in a select statement? Thanks!
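
    A likely diagnosis (hedged, since only the mapping file can confirm it): one of the columns selected from NJB_Requests allows NULL in the database, but the corresponding generated property on _4S.NJB_Request is a plain int, so LINQ to SQL fails while materializing the row. The fix is to make the property nullable in the designer/DBML; in C# terms, with ProcessState as a hypothetical column name:

        public partial class NJB_Request
        {
            // Before (throws "The null value cannot be assigned..." when the
            // column is NULL):
            //     public int ProcessState { get; set; }

            // After (nullable property matches the nullable column):
            public int? ProcessState { get; set; }
        }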

  • Using interfaces in an action signature of an ASP.NET MVC controller

    - by Dmitry Borovsky
    Hello, I want to use an interface in an action signature. So I've tried making my own ModelBinder by deriving from DefaultModelBinder: public class InterfaceBinder<T> : DefaultModelBinder where T: new() { protected override object CreateModel(ControllerContext controllerContext, ModelBindingContext bindingContext, Type modelType) { return base.CreateModel(controllerContext, bindingContext, typeof(T)); } } public interface IFoo { string Data { get; set; } } public class Foo: IFoo /*other interfaces*/ { /* a lot of other methods and properties*/ public Bar Data{get;set;} string IFoo.Data { get{return Data.ToString();} set{Data = new Bar(value);} } } public class MegaController: Controller { public ActionResult Process([ModelBinder(typeof(InterfaceBinder<Foo>))]IFoo foo){/*bla-bla-bla*/} } But it doesn't work. Does anybody have an idea how to achieve this behaviour? And yes, I know that I can make my own implementation of IModelBinder, but I'm looking for an easier way.
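
    One possible simplification (a sketch reusing the question's own IFoo and InterfaceBinder<Foo>; not a verified fix for the binding failure itself): register the binder once at application start, which at least keeps the attribute noise out of every action signature.

        using System.Web;
        using System.Web.Mvc;

        public class MvcApplication : HttpApplication
        {
            protected void Application_Start()
            {
                // Every IFoo action parameter now goes through InterfaceBinder<Foo>.
                ModelBinders.Binders.Add(typeof(IFoo), new InterfaceBinder<Foo>());
            }
        }

        // The action can then drop the attribute:
        // public ActionResult Process(IFoo foo) { /* ... */ }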

  • How to handle different roles with different privileges in PHP?

    - by user261002
    I would like to know how we can create different user roles for different users in PHP. Example: administrators can create all types of users; can add, view, and manipulate data; and can delete managers, viewers, and workers. Managers can only create workers and viewers, and can add and view data. Workers can't create new users, but can add and view data. Viewers can only view data that has been added to the DB by workers, managers, and administrators. I thought it would be better to use different sessions like: $_SESSION['admin'] $_SESSION['manager'] $_SESSION['worker'] $_SESSION['viewers'] and for every page check which of them has a true or yes value, but I want to know how it's done in real, big projects. Is there any other way?

  • Type patterns in Haskell

    - by finnsson
    I'm trying to compile a simple example of generic classes / type patterns (see http://www.haskell.org/ghc/docs/latest/html/users_guide/generic-classes.html) in Haskell, but it won't compile. Any ideas about what's wrong with the code would be helpful. According to the documentation there should be a module Generics with the data types Unit, :*:, and :+:, but GHC (6.12.1) complains about Not in scope: data constructor 'Unit' etc. It seems there's a package instant-generics with the data types :*:, :+: and U, but when I import that module (instead of Generics) I get the error Illegal type pattern in the generic bindings {myPrint _ = ""}. The complete source code is import Generics.Instant class MyPrint a where myPrint :: a -> String myPrint {| U |} _ = "" myPrint {| a :*: b |} (x :*: y) = "" (show x) ++ ":*:" ++ (show y) myPrint {| a :+: b |} _ = "" data Foo = Foo String instance MyPrint a => MyPrint a main = myPrint $ Foo "hi" and I compile it using ghc --make Foo.hs -fglasgow-exts -XGenerics -XUndecidableInstances. P.S. The module Generics exports no data types, only the functions: canDoGenerics mkGenericRhs mkTyConGenericBinds validGenericInstanceType validGenericMethodType

  • 10 SEO Optimization Tips You Would Pay Money to Know

    "SEO", also known as search engine optimization is one of the many ways to build traffic to your website. While many internet marketers believe the best way to build massive traffic is to focus your efforts on one type of traffic generation method, whether PPC, SEO Optimization or viral traffic, it is always good to tap into other sources of traffic. This article will give you 10 SEO optimization tips that you can start implementing in your websites or blogs immediately.

  • Context is Everything

    - by Angus Graham
    How many times have you asked a question only to hear an answer like "Well, it depends. What exactly are you trying to do?" There are times when raw information can't tell us what we need to know without putting it in a larger context. Let's take a real-world example. If I'm a maintenance planner trying to figure out which assets should be replaced during my next maintenance window, I'm going to go to my Asset Management System. I can get it to spit out a list of assets that have failed several times over the last year. But what are these assets connected to? Are there any safety consequences to shutting off this pipeline to do the work? Is some other work that's planned going to conflict with replacing this asset? Several of these questions can't be answered by simply spitting out a list of asset IDs. The maintenance planner will have to reference a diagram of the plant to answer several of them. This is precisely the idea behind Augmented Business Visualization. An Augmented Business Visualization (ABV) solution is one where your structured data (enterprise application data) and your unstructured data (documents, contracts, floor plans, designs, etc.) come together to allow you to make better decisions. Essentially, we're showing your business data in its context. AutoVue allows you to create ABV solutions by integrating your enterprise application with AutoVue's hotspot framework. Hotspots can be defined for your document. Users can click these hotspots to trigger actions in your enterprise app. Similarly, the enterprise app can highlight the hotspots in your document based on its business data, creating a visual dashboard of your business data in the context of your document. ABV is not new. We introduced the hotspot framework in AutoVue 20.1 with text hotspots: any text in a PDF or 2D CAD drawing could be turned into a hotspot. In 20.2 we have enhanced this with two new types of hotspots: 3D and regional hotspots. 3D hotspots allow you to turn 3D parts into hotspots. Hotspots can be defined based on the attributes of the part, so you can create hotspots based on part numbers, material, date of delivery, etc. Regional hotspots allow an administrator to define rectangular regions on any PDF, image, or 2D CAD drawing. This is perfect for cases where the document you're using either doesn't have text in it (a JPG or TIFF, for example) or where you want to define hotspots that don't correspond to the text in the document. There are lots of possible uses for AutoVue hotspots.
A great demonstration of how our hotspot capabilities can help add context to enterprise data in the Energy sector can be found in the following AutoVue movies: Maintenance Planning in the Energy Sector - Watch it Now Capital Construction Project Management in the Energy Sector  -  Watch it Now Commissioning and Handover Process for the Energy Sector  -  Watch it Now

  • What is the cost in bytes for the overhead of a sql_variant column in SQL Server?

    - by Elan
    I have a table which contains many columns of float data type with 15-digit precision. Each column consumes 8 bytes of storage. Most of the time the data does not require this amount of precision and could be stored as a real data type. In many cases the value can be 0, in which case I could get away with storing a single byte. My goal here is to optimize storage space requirements, which is an issue I am facing with the SQL Express 4 GB database size limit. If byte, real and float data types are stored in a sql_variant column, there is obviously some overhead involved in storing these values. What is the cost of this overhead? I would then need to evaluate whether I would actually end up with significant space savings (or not) by switching to sql_variant column data types. Thanks, Elan
