Search Results

Search found 65101 results on 2605 pages for 'big data'.


  • SQLite doesn't have booleans or date-times.

    - by DanM
    I've been thinking about using SQLite for my next project, but I'm concerned that it seems to lack proper datetime and bit data types. If I use DbLinq (or some other ORM) to generate C# classes, will the data types of the properties be "dumbed down"? Will date-time data be placed in properties of type string or double? Will boolean data be placed in properties of type int? If yes, what are the implications? I'm envisioning a scenario where I need to write a whole second layer of classes with more specific data types and do a bunch of transformations and casts, but maybe it's not as bad as I fear. If you have any experience with this or a similar scenario, how did you handle it?
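
    As an illustration of the "second layer of classes" the question envisions, here is a minimal C# sketch, assuming hypothetical raw columns stored with SQLite's TEXT and INTEGER affinities (the names and storage choices are made up for illustration, not DbLinq's actual output):

        using System;
        using System.Globalization;

        // Hypothetical shape of a generated row where SQLite has stored the
        // date as TEXT and the flag as INTEGER.
        public class TaskRow
        {
            public string DueDateRaw { get; set; }
            public long IsDoneRaw { get; set; }

            // Thin typed layer so the rest of the code sees DateTime/bool.
            public DateTime DueDate
            {
                get { return DateTime.Parse(DueDateRaw, CultureInfo.InvariantCulture); }
                set { DueDateRaw = value.ToString("o", CultureInfo.InvariantCulture); }
            }

            public bool IsDone
            {
                get { return IsDoneRaw != 0; }
                set { IsDoneRaw = value ? 1 : 0; }
            }
        }

    Kept in one place like this, the transformations and casts stay out of the calling code, so the "second layer" may be less painful than feared.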

    Read the article

  • SQL Server - Rebuilding Indexes

    - by Renso
    Goal: rebuild indexes in SQL Server. This can be done one at a time or with the example script below to rebuild all indexes for a specified table or for all tables in a given database.

    Why? The data in indexes gets fragmented over time. That means that as the index grows, newly added rows are physically stored in other sections of the allocated database storage space. Kind of like when you load your Christmas shopping into the trunk of your car: once the trunk is full, you continue to load some on the back seat. In the same way, some storage buffer is created for your index, but once that runs out the data is stored in other storage space, and the data in your index is no longer stored in contiguous physical pages. To access the index, the database manager has to "string together" disparate fragments to create the full index as one contiguous set of pages. Defragmentation fixes that.

    What does the fragmentation affect? Depending, of course, on how large the table is and how fragmented the data is, fragmentation can cause SQL Server to perform unnecessary data reads, slowing down SQL Server's performance.

    Which index to rebuild? As a rule, consider that when you rebuild a table's clustered index, all other non-clustered indexes on that same table will automatically be rebuilt. A table can only have one clustered index.

    How to rebuild all the indexes for one table: the DBCC DBREINDEX command, given an empty index name, rebuilds all of the indexes on a given table in a single statement, but it will not automatically do so for every table in the database.

    How to rebuild all indexes for all tables in a given database:

        USE [myDB]  -- enter your database name here
        DECLARE @tableName varchar(255)
        DECLARE TableCursor CURSOR FOR
            SELECT table_name FROM information_schema.tables
            WHERE table_type = 'base table'
        OPEN TableCursor
        FETCH NEXT FROM TableCursor INTO @tableName
        WHILE @@FETCH_STATUS = 0
        BEGIN
            DBCC DBREINDEX(@tableName, ' ', 90)  -- a fill factor of 90%
            FETCH NEXT FROM TableCursor INTO @tableName
        END
        CLOSE TableCursor
        DEALLOCATE TableCursor

    What does this script do? It reindexes all indexes in all tables of the given database. Each index is filled with a fill factor of 90%. While DBCC DBREINDEX runs and rebuilds the indexes, the table becomes temporarily unavailable to your users until the rebuild has completed, so don't do this during production hours: it creates a shared lock on the tables, although it will allow read-only uncommitted data reads, i.e. SELECT.

    What is the fill factor? It is the percentage of space on each index page reserved for storing data when the index is created or rebuilt. It replaces the fill factor specified when the index was created, becoming the new default for the index and for any other nonclustered indexes rebuilt because a clustered index is rebuilt. When fillfactor is 0, DBCC DBREINDEX uses the fill factor value last specified for the index. This value is stored in the sys.indexes catalog view. If fillfactor is specified, table_name and index_name must be specified. If fillfactor is not specified, the default fill factor, 100, is used.

    How do I determine the level of fragmentation? Run the DBCC SHOWCONTIG command. However, this requires you to specify the ID of both the table and the index being examined. To make it a lot easier, by only requiring you to specify the table name and/or index name, you can run this script:

        DECLARE @ID int, @IndexID int, @IndexName varchar(128)
        -- Specify the table and index names
        SELECT @IndexName = 'index_name'   -- name of the index
        SET @ID = OBJECT_ID('table_name')  -- name of the table
        SELECT @IndexID = IndID
        FROM sysindexes
        WHERE id = @ID AND name = @IndexName
        -- Show the level of fragmentation
        DBCC SHOWCONTIG (@ID, @IndexID)

    Here is an example:

        DBCC SHOWCONTIG scanning 'Tickets' table...
        Table: 'Tickets' (1829581556); index ID: 1, database ID: 13
        TABLE level scan performed.
        - Pages Scanned................................: 915
        - Extents Scanned..............................: 119
        - Extent Switches..............................: 281
        - Avg. Pages per Extent........................: 7.7
        - Scan Density [Best Count:Actual Count].......: 40.78% [115:282]
        - Logical Scan Fragmentation ..................: 16.28%
        - Extent Scan Fragmentation ...................: 99.16%
        - Avg. Bytes Free per Page.....................: 2457.0
        - Avg. Page Density (full).....................: 69.64%
        DBCC execution completed. If DBCC printed error messages, contact your system administrator.

    What's important here? The Scan Density. Ideally it should be 100%; as time goes by it drops as fragmentation occurs. When the level drops below 75%, you should consider re-indexing.

    Here are the results for the same table and clustered index after running the script:

        DBCC SHOWCONTIG scanning 'Tickets' table...
        Table: 'Tickets' (1829581556); index ID: 1, database ID: 13
        TABLE level scan performed.
        - Pages Scanned................................: 692
        - Extents Scanned..............................: 87
        - Extent Switches..............................: 86
        - Avg. Pages per Extent........................: 8.0
        - Scan Density [Best Count:Actual Count].......: 100.00% [87:87]
        - Logical Scan Fragmentation ..................: 0.00%
        - Extent Scan Fragmentation ...................: 22.99%
        - Avg. Bytes Free per Page.....................: 639.8
        - Avg. Page Density (full).....................: 92.10%
        DBCC execution completed. If DBCC printed error messages, contact your system administrator.

    What's different? The Scan Density has increased from 40.78% to 100%: no fragmentation on the clustered index. Note that since we rebuilt the clustered index, all other indexes were also rebuilt.
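
    As a hedged aside, if you need to kick the rebuild-all script off from application code, a minimal C# sketch might look like the following (the connection string and catalog name are placeholders; note also that DBCC DBREINDEX is deprecated in favour of ALTER INDEX ... REBUILD from SQL Server 2005 onwards):

        using System.Data.SqlClient;

        class ReindexRunner
        {
            static void Main()
            {
                // Placeholder connection string -- point this at your own database.
                const string connString =
                    "Data Source=.;Initial Catalog=myDB;Integrated Security=True";

                const string rebuildScript = @"
                    DECLARE @tableName varchar(255)
                    DECLARE TableCursor CURSOR FOR
                        SELECT table_name FROM information_schema.tables
                        WHERE table_type = 'base table'
                    OPEN TableCursor
                    FETCH NEXT FROM TableCursor INTO @tableName
                    WHILE @@FETCH_STATUS = 0
                    BEGIN
                        DBCC DBREINDEX(@tableName, ' ', 90)
                        FETCH NEXT FROM TableCursor INTO @tableName
                    END
                    CLOSE TableCursor
                    DEALLOCATE TableCursor";

                using (SqlConnection conn = new SqlConnection(connString))
                using (SqlCommand cmd = new SqlCommand(rebuildScript, conn))
                {
                    cmd.CommandTimeout = 0;  // index rebuilds can run long
                    conn.Open();
                    cmd.ExecuteNonQuery();
                }
            }
        }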

    Read the article

  • Parsing dbpedia JSON in Python

    - by givp
    Hello, I'm trying to get my head around the dbpedia JSON schema and can't figure out an efficient way of extracting a specific node. This is what dbpedia gives me: http://dbpedia.org/data/Ceramic_art.json I've got the whole thing as a JSON object in Python but don't really understand how to get the English abstract from this data. I've gotten this far:

        u = "http://dbpedia.org/data/Ceramic_art.json"
        data = urlfetch.fetch(url=u)
        json_data = json.loads(data.content)
        for j in json_data["http://dbpedia.org/resource/Ceramic_art"]:
            if j == "http://dbpedia.org/ontology/abstract":
                print "it's here"

    Not sure how to proceed from here. As you can see, there are multiple languages; I need to get the English abstract. Thanks for your help, g

    Read the article

  • Is it dangerous to store user-entered text into a hidden form via javascript?

    - by KallDrexx
    In my asp.net MVC application I am using in-place editors to allow users to edit fields without a standard form view. Unfortunately, since I am using Linq to Sql combined with my data mapping layer, I cannot just update one field at a time and instead need to send all fields over at once. So the solution I came up with was to store all my model fields in hidden fields, and provide span tags that contain the visible data (these span tags become editable thanks to my jQuery plugin). When a user saves an edit to a field, jQuery takes the value, places it in the hidden field, and sends the whole form to the server to commit via ajax. When the data goes into the hidden field originally (at page load) and into the span tags, the data is properly encoded, but when the user changes the data in the contenteditable span field, I just run:

        $("#hiddenfield").val($("#spanfield").html());

    Am I opening any holes with this method? Obviously the server also properly encodes everything prior to database entry.
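
    The server-side guard the question relies on might look like the hedged C# sketch below (the controller, model, and action names are made up for illustration; the point is simply that the value copied out of the span is re-encoded server-side, never trusted):

        using System.Web;
        using System.Web.Mvc;

        public class ProfileModel
        {
            public string AboutMe { get; set; }  // bound from the hidden field
        }

        public class ProfileController : Controller
        {
            [HttpPost]
            public ActionResult Save(ProfileModel model)
            {
                // Re-encode on the server: never trust what jQuery copied out
                // of the contenteditable span, since a user can inject markup there.
                string safe = HttpUtility.HtmlEncode(model.AboutMe ?? string.Empty);
                // ... persist 'safe' via the data layer ...
                return Json(new { ok = true });
            }
        }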

    Read the article

  • MS Access Mark Duplicates in order of appearance - using the function RankOfDup: (SELECT Count(*) ...)

    - by veska stoyanova
    I'm trying to create a ranking that shows the sequence of agreements for the two fields Customer and Agreement. The number for agreements must be unique, whereas customers can repeat. The formula

        RankOfDup: (SELECT Count(*) FROM Data a
                    WHERE a.customer = Data.customer And a.agreement >= Data.agreement)

    works beautifully, but after this query with columns Agreement, Customer and RankOfDup, I need to create a crosstab that transposes the RankOfDup. It works when I make the table first and then create the query, but my data is too large, so I'm trying to put the select query with the ranking directly in a crosstab query. However, when I try to do this, Access gives an error message that the Microsoft Jet database engine doesn't recognise Data.customer. Any ideas how I can fix this?

    Read the article

  • Is it OK to pass SQLCommand as a parameter?

    - by TooFat
    I have a Business Layer that passes a conn string and a SqlCommand to a Data Layer like so:

        public void PopulateLocalData()
        {
            System.Data.SqlClient.SqlCommand cmd = new System.Data.SqlClient.SqlCommand();
            cmd.CommandType = System.Data.CommandType.StoredProcedure;
            cmd.CommandText = "usp_PopulateServiceSurveyLocal";
            DataLayer.DataProvider.ExecSQL(ConnString, cmd);
        }

    The DataLayer then just executes the sql like so:

        public static int ExecSQL(string sqlConnString, System.Data.SqlClient.SqlCommand cmd)
        {
            int rowsAffected;
            using (SqlConnection conn = new SqlConnection(sqlConnString))
            {
                conn.Open();
                cmd.Connection = conn;
                rowsAffected = cmd.ExecuteNonQuery();
                cmd.Dispose();
            }
            return rowsAffected;
        }

    Is it OK for me to pass the SqlCommand as a parameter like this, or is there a better, more accepted way of doing it? One of my concerns is that if an error occurs when executing the query, the cmd.Dispose line will never execute. Does that mean it will continue to use up memory that will never be released?
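
    For the disposal concern specifically, a minimal sketch of the same method with the caller's command wrapped in a using block, so Dispose runs even when ExecuteNonQuery throws (disposing the caller's command inside the data layer is a design choice worth making explicit):

        using System.Data.SqlClient;

        public static class DataProvider
        {
            public static int ExecSQL(string sqlConnString, SqlCommand cmd)
            {
                using (SqlConnection conn = new SqlConnection(sqlConnString))
                using (cmd)  // guarantees cmd.Dispose() even if ExecuteNonQuery throws
                {
                    conn.Open();
                    cmd.Connection = conn;
                    return cmd.ExecuteNonQuery();
                }
            }
        }

    Undisposed commands are still reclaimed by the garbage collector eventually, so the memory is not lost forever; the using block just makes the cleanup deterministic rather than leaving it to finalization.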

    Read the article

  • PHP: best practice. Do I save html tags in DB or store the html entity value?

    - by Matt
    Hi guys, I was wondering which way I should do the following. I am using the TinyMCE wysiwyg editor, which formats the user's data with the right html tags. Now, I need to save the data entered into the editor into a database table. Should I encode the html tags to their corresponding entities when inserting into the DB, so that when I get the data back from the table I don't have to encode it for XSS purposes, but I'd still have to use eval for the html tags to format the text? OR do I save the html tags into the database, then when I get the data back from the database encode the html tags to their entities, but then, as the tags will appear to the user, I'd have to use the eval function to actually format the data as it was entered? My thoughts are with the first option; I just wondered what you guys thought. Thanks M

    Read the article

  • Using Interfaces in an action signature of ASP.NET MVC controller

    - by Dmitry Borovsky
    Hello, I want to use an interface in an Action signature, so I've tried making my own ModelBinder by deriving from DefaultModelBinder:

        public class InterfaceBinder<T> : DefaultModelBinder where T : new()
        {
            protected override object CreateModel(ControllerContext controllerContext,
                ModelBindingContext bindingContext, Type modelType)
            {
                return base.CreateModel(controllerContext, bindingContext, typeof(T));
            }
        }

        public interface IFoo
        {
            string Data { get; set; }
        }

        public class Foo : IFoo /* other interfaces */
        {
            /* a lot of other methods and properties */
            public Bar Data { get; set; }

            string IFoo.Data
            {
                get { return Data.ToString(); }
                set { Data = new Bar(value); }
            }
        }

        public class MegaController : Controller
        {
            public ActionResult Process(
                [ModelBinder(typeof(InterfaceBinder<Foo>))] IFoo foo)
            { /* bla-bla-bla */ }
        }

    But it doesn't work. Does anybody have an idea how to achieve this behaviour? And yes, I know that I can write my own implementation of IModelBinder, but I'm looking for an easier way.

    Read the article

  • LINQ - Select Statement - The null value cannot be assigned to a member with type System.Int32 which is a non-nullable value type

    - by thiag0
    I am trying to achieve the following...

        _4S.NJB_Request request = (from r in db.NJB_Requests
                                   where r.RequestId == referenceId
                                   select r).Take(1).SingleOrDefault();

    Getting the following exception...

        Message: The null value cannot be assigned to a member with type System.Int32 which is a non-nullable value type.
        StackTrace:
            at System.Data.Linq.SqlClient.SqlProvider.Execute(Expression query, QueryInfo queryInfo, IObjectReaderFactory factory, Object[] parentArgs, Object[] userArgs, ICompiledSubQuery[] subQueries, Object lastResult)
            at System.Data.Linq.SqlClient.SqlProvider.ExecuteAll(Expression query, QueryInfo[] queryInfos, IObjectReaderFactory factory, Object[] userArguments, ICompiledSubQuery[] subQueries)
            at System.Data.Linq.SqlClient.SqlProvider.System.Data.Linq.Provider.IProvider.Execute(Expression query)
            at System.Data.Linq.DataQuery`1.System.Linq.IQueryProvider.Execute[S](Expression expression)
            at System.Linq.Queryable.SingleOrDefault[TSource](IQueryable`1 source)
            at DAL.SqlDataProvider.MarkNJBPCRequestAsComplete(Int32 referenceId, Int32 processState)

    I have verified that 'referenceId' does have a value. Anyone know why this would happen in a select statement? Thanks!
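
    For what it's worth, the usual cause of this exception is a NULL arriving in a column that the generated class maps as a non-nullable int. A hedged illustration follows; the ProcessState column is a guess taken from the stack trace, and the real fix is to mark the corresponding DBML member as nullable:

        using System.Data.Linq.Mapping;

        // Illustrative hand-written mapping -- the real class is generated
        // from the .dbml; the column name here is an assumption.
        [Table(Name = "NJB_Requests")]
        public class NJB_RequestSketch
        {
            [Column(IsPrimaryKey = true)]
            public int RequestId { get; set; }

            // A NULL-able database column must map to int? (Nullable<int>);
            // mapping it as plain int produces exactly the exception above.
            [Column(CanBeNull = true)]
            public int? ProcessState { get; set; }
        }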

    Read the article

  • MySQL get row closest to NOW()

    - by Christopher McCann
    I have a table with user data such as name, address etc. and another table which has a paragraph of text about the user. The reason they are separate is that we need to keep a record of all the old "about" data: if the user changes their paragraph, the old one should still be stored. Each bit of "about" data has a primary key, aboutMeID. What I want to do is have a join that pulls their name, address etc. and the latest bit of aboutMe data from the other table. I am not sure, though, how I can order the join to get only the latest "about me" data. Can someone help?

    Read the article

  • Django json serialization problem

    - by codingJoe
    I am having difficulty serializing a django object. The problem is that there are foreign keys. I want the serialization to have data from the referenced object, not just the index. For example, I would like the sponsor data field to say "sponsor.last_name, sponsor.first_name" rather than "13". How can I fix my serialization?

    json data:

        {"totalCount": "2", "activities": [{"pk": 1, "model": "app.activity",
          "fields": {"activity_date": "2010-12-20", "description": "my activity",
          "sponsor": 13, "location": 1, ....

    model code:

        class Activity(models.Model):
            activity_date = models.DateField()
            description = models.CharField(max_length=200)
            sponsor = models.ForeignKey(Sponsor)
            location = models.ForeignKey(Location)

        class Sponsor(models.Model):
            last_name = models.CharField(max_length=20)
            first_name = models.CharField(max_length=20)
            specialty = models.CharField(max_length=100)

        class Location(models.Model):
            location_num = models.IntegerField(primary_key=True)
            location_name = models.CharField(max_length=100)

        def activityJSON(request):
            activities = Activity.objects.all()
            total = activities.count()
            activities_json = serializers.serialize("json", activities)
            data = "{\"totalCount\":\"%s\",\"activities\":%s}" % (total, activities_json)
            return HttpResponse(data, mimetype="application/json")

    Read the article

  • Type patterns in Haskell

    - by finnsson
    I'm trying to compile a simple example of generic classes / type patterns (see http://www.haskell.org/ghc/docs/latest/html/users_guide/generic-classes.html) in Haskell, but it won't compile. Any ideas about what's wrong with the code would be helpful. According to the documentation there should be a module Generics with the data types Unit, :*:, and :+:, but ghc (6.12.1) complains about "Not in scope: data constructor 'Unit'" etc. It seems there's a package instant-generics with the data types :*:, :+: and U, but when I import that module (instead of Generics) I get the error "Illegal type pattern in the generic bindings {myPrint _ = ""}". The complete source code is:

        import Generics.Instant

        class MyPrint a where
          myPrint :: a -> String

          myPrint {| U |} _ = ""
          myPrint {| a :*: b |} (x :*: y) = "" ++ (show x) ++ ":*:" ++ (show y)
          myPrint {| a :+: b |} _ = ""

        data Foo = Foo String
        instance MyPrint a => MyPrint a

        main = myPrint $ Foo "hi"

    and I compile it using:

        ghc --make Foo.hs -fglasgow-exts -XGenerics -XUndecidableInstances

    P.S. The module Generics exports no data types, only the functions: canDoGenerics, mkGenericRhs, mkTyConGenericBinds, validGenericInstanceType, validGenericMethodType

    Read the article

  • Python or matplotlib limitation error

    - by Werner
    Hi, I wrote an algorithm using python and matplotlib that generates histograms from some text input data. When the number of input data points is greater than approximately 15000, I get an error on the append line of my code:

        mydata = []
        for i in range(len(data)):
            mydata.append(string.atof(data[i]))

    the error:

        Traceback (most recent call last):
          File "get_histogram_picture.py", line 25, in <module>
            mydata.append(string.atof(data[i]))
          File "/usr/lib/python2.6/string.py", line 388, in atof
            return _float(s)
        ValueError: invalid literal for float(): -a

    Can it be an error in python? What is the solution? Thanks

    Read the article

  • What is the cost in bytes for the overhead of a sql_variant column in SQL Server?

    - by Elan
    I have a table which contains many columns of float data type with 15-digit precision. Each column consumes 8 bytes of storage. Most of the time the data does not require this amount of precision and could be stored as a real data type. In many cases the value can be 0, in which case I could get away with storing a single byte. My goal here is to optimize storage space requirements, an issue I am facing with SQL Express's 4GB database size limit. If byte, real and float data types are stored in a sql_variant column, there is obviously some overhead involved in storing these values. What is the cost of this overhead? I would then need to evaluate whether I would actually end up with significant space savings (or not) by switching to the sql_variant column data type. Thanks, Elan

    Read the article

  • Context is Everything

    - by Angus Graham
    How many times have you asked a question only to hear an answer like "Well, it depends. What exactly are you trying to do?". There are times that raw information can't tell us what we need to know without putting it in a larger context.

    Let's take a real world example. If I'm a maintenance planner trying to figure out which assets should be replaced during my next maintenance window, I'm going to go to my Asset Management System. I can get it to spit out a list of assets that have failed several times over the last year. But what are these assets connected to? Are there any safety consequences to shutting off this pipeline to do the work? Is some other work that's planned going to conflict with replacing this asset? Several of these questions can't be answered by simply spitting out a list of asset IDs. The maintenance planner will have to reference a diagram of the plant to answer them.

    This is precisely the idea behind Augmented Business Visualization. An Augmented Business Visualization (ABV) solution is one where your structured data (enterprise application data) and your unstructured data (documents, contracts, floor plans, designs, etc.) come together to allow you to make better decisions. Essentially, we're showing your business data in its context.

    AutoVue allows you to create ABV solutions by integrating your enterprise application with AutoVue's hotspot framework. Hotspots can be defined for your document. Users can click these hotspots to trigger actions in your enterprise app. Similarly, the enterprise app can highlight the hotspots in your document based on its business data, creating a visual dashboard of your business data in the context of your document.

    ABV is not new. We introduced the hotspot framework in AutoVue 20.1 with text hotspots: any text in a PDF or 2D CAD drawing could be turned into a hotspot. In 20.2 we have enhanced this to include 2 new types of hotspots: 3D and regional hotspots. 3D hotspots allow you to turn 3D parts into hotspots. Hotspots can be defined based on the attributes of the part, so you can create hotspots based on part numbers, material, date of delivery, etc. Regional hotspots allow an administrator to define rectangular regions on any PDF, image, or 2D CAD drawing. This is perfect for cases where the document you're using either doesn't have text in it (a JPG or TIFF, for example) or where you want to define hotspots that don't correspond to the text in the document. There are lots of possible uses for AutoVue hotspots.
    A great demonstration of how our hotspot capabilities can help add context to enterprise data in the Energy sector can be found in the following AutoVue movies:

      • Maintenance Planning in the Energy Sector - Watch it Now
      • Capital Construction Project Management in the Energy Sector - Watch it Now
      • Commissioning and Handover Process for the Energy Sector - Watch it Now

    Read the article

  • How to handle different roles with different privileges in PHP?

    - by user261002
    I would like to know how we can create different user roles for different users in PHP. For example: an administrator can create all types of users and can add, view, manipulate and delete data; managers can only create workers and viewers, and can add and view data; workers can't create new users, but can add and view data; viewers can only view data that has been added to the DB by workers, managers and administrators. I thought it's better to use different sessions like:

        $_SESSION['admin']
        $_SESSION['manager']
        $_SESSION['worker']
        $_SESSION['viewer']

    and for every page check which of them has a true or yes value, but I want to know how they do it in real and big projects. Is there any other way?

    Read the article

  • ASP.Net MVC 2 / EF 4 Reference Issue

    - by Eric J.
    My ASP.Net MVC 2 project references a Domain project where POCO business objects are defined and a Data project where EF 4 POCO persistence is implemented. Things were running well until I had a little fussiness with my version control provider (a rollback to a previous version left me with merge conflicts). Now, upon launching the MVC 2 project, I get a runtime error:

        The type 'System.Data.Objects.DataClasses.IEntityWithKey' is defined in an assembly that is not referenced. You must add a reference to assembly 'System.Data.Entity, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'.

    However, every project references System.Data.Entity (same version). If I remove the reference to System.Data.Entity from the MVC 2 project, I get the same message as a compile-time error. I'm pretty sure something got messed up when I had the version control issue, but I'm really not sure where to look for this one.

    Read the article

  • Windows debugging - WinDbg

    - by Santhosh77
    Hi, I got the following error while debugging a process with its core dump:

        0:000> !lmi test.exe
        Loaded Module Info: [test.exe]
        Module: test
        Base Address: 00400000
        Image Name: test.exe
        Machine Type: 332 (I386)
        Time Stamp: 4a3a38ec Thu Jun 18 07:54:04 2009
        Size: 27000
        CheckSum: 54c30
        Characteristics: 10f
        Debug Data Dirs: Type  Size  VA  Pointer
            MISC       110, 0, 21000 [Debug data not mapped]
            FPO         50, 0, 21110 [Debug data not mapped]
            CODEVIEW 31820, 0, 21160 [Debug data not mapped] - Can't validate symbols, if present.
        Image Type: FILE - Image read successfully from debugger. test.exe
        Symbol Type: CV - Symbols loaded successfully from image path.
        Load Report: cv symbols & lines

    Does anybody know what the error "CODEVIEW 31820, 0, 21160 [Debug data not mapped] - Can't validate symbols, if present." really means? Does this error mean that I can't read public/private symbols from the executable? If not, why does the WinDbg debugger throw this type of error? Thanks in advance, Santhosh.

    Read the article

  • foreign key constraints on primary key columns - issues ?

    - by zzzeek
    What are the pros/cons, from a performance/indexing/data management perspective, of creating a one-to-one relationship between tables using the primary key on the child as foreign key, versus a pure surrogate primary key on the child? The first approach seems to reduce redundancy and nicely constrains the one-to-one implicitly, while the second approach seems to be favored by DBAs, even though it creates a second index.

    Primary key as foreign key:

        create table parent (
            id integer primary key,
            data varchar(50)
        )

        create table child (
            id integer primary key references parent(id),
            data varchar(50)
        )

    Pure surrogate key:

        create table parent (
            id integer primary key,
            data varchar(50)
        )

        create table child (
            id integer primary key,
            parent_id integer unique references parent(id),
            data varchar(50)
        )

    The platforms of interest here are Postgresql and Microsoft SQL Server.

    Read the article

  • Problem calling stored procedure with a fixed length binary parameter using Entity Framework

    - by Dave
    I have a problem calling stored procedures with a fixed-length binary parameter using Entity Framework. The stored procedure ends up being called with 8000 bytes of data no matter what size byte array I use to call the function import. To give an example, this is the code I am using:

        byte[] cookie = new byte[32];
        byte[] data = new byte[2];
        entities.Insert("param1", "param2", cookie, data);

    The parameters are nvarchar(50), nvarchar(50), binary(32), varbinary(2000). When I run the code through SQL Profiler, I get this result:

        exec [dbo].[Insert] @param1=N'param1',@param2=N'param2',
        @cookie=0x0000000000000000000000000000000000000000000000000000000000000000 [SNIP because of 16000 zeros] ,@data=0x0000

    All parameters went through OK other than the binary(32) cookie. The varbinary(2000) seemed to work fine and the correct length was maintained. Is there a way to prevent the extra data being sent to SQL Server? This seems like a big waste of network resources.
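
    As a point of comparison, here is a hedged ADO.NET sketch that calls the same procedure with explicitly sized parameters, which keeps the cookie payload at 32 bytes (the procedure and parameter names follow the question; whether EF's function import can be coaxed into the same behaviour is the open question):

        using System.Data;
        using System.Data.SqlClient;

        class InsertCaller
        {
            static void Exec(string connString, byte[] cookie, byte[] data)
            {
                using (SqlConnection conn = new SqlConnection(connString))
                using (SqlCommand cmd = new SqlCommand("dbo.Insert", conn))
                {
                    cmd.CommandType = CommandType.StoredProcedure;
                    cmd.Parameters.Add("@param1", SqlDbType.NVarChar, 50).Value = "param1";
                    cmd.Parameters.Add("@param2", SqlDbType.NVarChar, 50).Value = "param2";
                    // Declaring the size explicitly stops the provider from
                    // defaulting to the maximum (8000 bytes) for the binary parameter.
                    cmd.Parameters.Add("@cookie", SqlDbType.Binary, 32).Value = cookie;
                    cmd.Parameters.Add("@data", SqlDbType.VarBinary, 2000).Value = data;
                    conn.Open();
                    cmd.ExecuteNonQuery();
                }
            }
        }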

    Read the article

  • RSA implementations for Java, alternative to BC

    - by Tom Brito
    The RSA implementation that ships with Bouncy Castle only allows the encrypting of a single block of data. The RSA algorithm is not suited to streaming data and should not be used that way. In a situation like this you should encrypt the data using a randomly generated key and a symmetric cipher; after that you should encrypt the randomly generated key using RSA, and then send the encrypted data and the encrypted random key to the other end, where they can reverse the process (i.e. decrypt the random key using their RSA private key and then decrypt the data). I can't use the workaround of using a symmetric key. So, are there other implementations of RSA than Bouncy Castle?

    Read the article

  • LINQ - Linq to Sql - Specified cast is not valid - Please Help!

    - by thiag0
    I am trying to do the following...

        Request request = (from r in db.Requests
                           where r.Status == "Processing" && r.Locked == false
                           select r).SingleOrDefault();

    It is throwing the following exception...

        Message: Specified cast is not valid.
        StackTrace:
            at System.Data.Linq.SqlClient.SqlProvider.Execute(Expression query, QueryInfo queryInfo, IObjectReaderFactory factory, Object[] parentArgs, Object[] userArgs, ICompiledSubQuery[] subQueries, Object lastResult)
            at System.Data.Linq.SqlClient.SqlProvider.ExecuteAll(Expression query, QueryInfo[] queryInfos, IObjectReaderFactory factory, Object[] userArguments, ICompiledSubQuery[] subQueries)
            at System.Data.Linq.SqlClient.SqlProvider.System.Data.Linq.Provider.IProvider.Execute(Expression query)
            at System.Data.Linq.DataQuery`1.System.Linq.IQueryProvider.Execute[S](Expression expression)
            at System.Linq.Queryable.SingleOrDefault[TSource](IQueryable`1 source)
            at GDRequestProcessor.Worker.GetNextRequest()

    The .DBML file schema matches my database table that I am trying to select from, so I have no clue why I am having this problem. Can anyone help me? Thanks in advance!
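
    A hedged way to narrow a mapping mismatch like this down is to log the SQL that LINQ to SQL actually generates and compare the returned column types with the .dbml; DataContext.Log exists for exactly this. A minimal sketch, using a hand-written stand-in for the generated classes (the column names are taken from the question; the rest is illustrative):

        using System;
        using System.Data.Linq;
        using System.Data.Linq.Mapping;
        using System.Linq;

        // Illustrative mapping -- the real classes come from the .dbml.
        [Table(Name = "Requests")]
        public class Request
        {
            [Column(IsPrimaryKey = true)] public int RequestId { get; set; }
            [Column] public string Status { get; set; }
            [Column] public bool Locked { get; set; }
        }

        class Diagnose
        {
            static void Run(string connString)
            {
                using (var db = new DataContext(connString))
                {
                    // Echo the generated SQL; running it by hand in SSMS shows
                    // the real column types coming back, which can then be
                    // checked one by one against the types declared in the .dbml.
                    db.Log = Console.Out;

                    Request request = (from r in db.GetTable<Request>()
                                       where r.Status == "Processing" && r.Locked == false
                                       select r).SingleOrDefault();
                }
            }
        }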

    Read the article

  • Flash crash on long JSON load time

    - by MooCow
    I have a Flex program that gets a JSON array from a PHP script. The PHP script doesn't contain just a simple JSON array: it grabs data from Activecollab and does some work on the data before encoding it. The first test involved a small JSON array that took PHP only a short time to encode. However, when I try to scale up the test, the Flash movie crashes trying to load the JSON data from PHP. There's no code difference between the tests, just the amount of data and the amount of time it takes PHP to encode it. Am I looking at a memory problem or a timeout problem?

    Read the article

  • How do I ignore the UTF-8 Byte Order Marker in String comparisons?

    - by Skrud
    I'm having a problem comparing strings in a unit test in C# 4.0 using Visual Studio 2010. The same test case works properly in Visual Studio 2008 (with C# 3.5). Here's the relevant code snippet:

        byte[] rawData = GetData();
        string data = Encoding.UTF8.GetString(rawData);
        Assert.AreEqual("Constant", data, false, CultureInfo.InvariantCulture);

    While debugging this test, the data string appears to the naked eye to contain exactly the same string as the literal. When I called data.ToCharArray(), I noticed that the first char of the string data is the value 65279, the UTF-8 Byte Order Marker. What I don't understand is why Encoding.UTF8.GetString() keeps this byte around. How do I get Encoding.UTF8.GetString() to not put the Byte Order Marker in the resulting string?
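
    A sketch of the usual workaround: Encoding.GetString never strips a BOM by itself, so either trim the U+FEFF character after decoding or decode through a StreamReader, which detects and skips a leading BOM by default (GetData() here is the question's own method, assumed to return UTF-8 bytes):

        using System.IO;
        using System.Text;

        static class BomFree
        {
            public static string Decode(byte[] rawData)
            {
                // Option 1: decode, then trim a leading U+FEFF if present.
                string data = Encoding.UTF8.GetString(rawData).TrimStart('\uFEFF');

                // Option 2: StreamReader consumes a matching BOM while decoding.
                using (var reader = new StreamReader(new MemoryStream(rawData), Encoding.UTF8))
                {
                    data = reader.ReadToEnd();
                }
                return data;
            }
        }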

    Read the article

  • C++ converting binary(P5) image to ascii(P2) image (.pgm)

    - by tubby
    I am writing a simple program to convert grayscale binary (P5) to grayscale ascii (P2) but am having trouble reading in the binary and converting it to int:

        #include <iostream>
        #include <fstream>
        #include <sstream>
        using namespace::std;

        int usage(char* arg)
        {
            // exit program
            cout << arg << ": Error" << endl;
            return -1;
        }

        int main(int argc, char* argv[])
        {
            int rows, cols, size, greylevels;
            string filetype;

            // open stream in binary mode
            ifstream istr(argv[1], ios::in | ios::binary);
            if (istr.fail()) return usage(argv[1]);

            // parse header
            istr >> filetype >> rows >> cols >> greylevels;
            size = rows * cols;

            // check data
            cout << "filetype: " << filetype << endl;
            cout << "rows: " << rows << endl;
            cout << "cols: " << cols << endl;
            cout << "greylevels: " << greylevels << endl;
            cout << "size: " << size << endl;

            // parse data values
            int* data = new int[size];
            int fail_tracker = 0; // find which pixel failing on
            for (int* ptr = data; ptr < data + size; ptr++)
            {
                char t_ch;
                // read in binary char
                istr.read(&t_ch, sizeof(char));
                // convert to integer
                int t_data = static_cast<int>(t_ch);
                // check if legal pixel
                if (t_data < 0 || t_data > greylevels)
                {
                    cout << "Failed on pixel: " << fail_tracker << endl;
                    cout << "Pixel value: " << t_data << endl;
                    return usage(argv[1]);
                }
                // if passes add value to data array
                *ptr = t_data;
                fail_tracker++;
            }

            // close the stream
            istr.close();

            // write a new P2 ascii image
            ofstream ostr("greyscale_ascii_version.pgm");
            // write header
            ostr << "P2 " << rows << cols << greylevels << endl;
            // write data
            int line_ctr = 0;
            for (int* ptr = data; ptr < data + size; ptr++)
            {
                // print pixel value
                ostr << *ptr << " ";
                // endl every ~20 pixels for some readability
                if (++line_ctr % 20 == 0) ostr << endl;
            }
            ostr.close();

            // clean up
            delete[] data;
            return 0;
        }

    sample image - Pulled this from an old post. Removed the comment within the image file as I am not worried about this functionality now. When compiled with g++ I get output:

        $> ./a.out a.pgm
        filetype: P5
        rows: 1024
        cols: 768
        greylevels: 255
        size: 786432
        Failed on pixel: 1
        Pixel value: -110
        a.pgm: Error

    The image is a little duck and there's no way the pixel value can be -110... where am I going wrong? Thanks.

    Read the article
