Search Results

Search found 18028 results on 722 pages for 'atomic values'.


  • Odd 'UNION' behavior in an Oracle SQL query

    - by RenderIn
    Here's my query:

      SELECT my_view.*
      FROM my_view
      WHERE my_view.trial in (select 2 as trial_id from dual
                              union select 3 from dual
                              union select 4 from dual)
      and my_view.location like ('123-%')

    When I execute this query it returns results that do not conform to the my_view.location like ('123-%') condition. It's as if that condition is being ignored completely. I can even change it to my_view.location IS NULL and it returns the same results, despite that field being not nullable.

    I know this query seems ridiculous with the selects from dual, but I've structured it this way to replicate a problem I have when I use a 'WITH' clause (the results of that query are what the selects-from-dual inline view stands in for). I can modify the query like so and it returns the expected results:

      SELECT my_view.*
      FROM my_view
      WHERE my_view.trial in (2, 3, 4)
      and my_view.location like ('123-%')

    Unfortunately I do not know the trial values up front (they are queried for in a 'WITH' clause), so I cannot structure my query this way. What am I doing wrong? I will say that my_view is composed of 3 other views whose results are combined with UNION ALL, each of which retrieves some data over a DB link. Not that I believe that should matter, but in case it does.
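    One workaround worth testing (this is only a sketch, not a confirmed fix) is to express the filter with EXISTS instead of IN, which sometimes sidesteps optimizer problems with an IN (subquery) predicate against views built from UNION ALL over DB links:

      SELECT v.*
      FROM my_view v
      WHERE v.location LIKE '123-%'
      AND EXISTS (SELECT 1
                  FROM (SELECT 2 AS trial_id FROM dual
                        UNION SELECT 3 FROM dual
                        UNION SELECT 4 FROM dual) t
                  WHERE t.trial_id = v.trial)

    If this returns the expected rows, the problem is likely in how the optimizer merges the IN subquery with the distributed view rather than in the LIKE condition itself.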

    Read the article

  • Update Azure Service Configuration File using Powershell

    - by David Osborn
    I'm trying to write a PowerShell script that updates each of the DiagnosticsConnectionString and DataConnectionString values below, but I can't seem to find each individual Role node using:

      $serviceconfig.ServiceConfiguration.SelectSingleNode("Role[@name='MyService_WorkerRole']")

    Doing echo $serviceconfig.ServiceConfiguration.Role lists out both Role nodes for me, so I know it is working up to that point, but after that I am not having much success. $serviceConfig contains the below XML:

      <?xml version="1.0"?>
      <ServiceConfiguration serviceName="MyService" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
        <Role name="MyService_WorkerRole">
          <Instances count="1" />
          <ConfigurationSettings>
            <Setting name="DiagnosticsConnectionString" value="really long string" />
            <Setting name="DataConnectionString" value="really long string 2" />
          </ConfigurationSettings>
        </Role>
        <Role name="MyService_WebRole">
          <Instances count="1" />
          <ConfigurationSettings>
            <Setting name="DiagnosticsConnectionString" value="really long string 3" />
            <Setting name="DataConnectionString" value="really long string 4" />
          </ConfigurationSettings>
        </Role>
      </ServiceConfiguration>
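    A likely cause (sketched below, assuming $serviceconfig is the loaded [xml] document): the file declares a default XML namespace, and SelectSingleNode with a plain "Role[...]" XPath will not match namespaced elements. Registering the namespace with an XmlNamespaceManager (the "sc" prefix is arbitrary) usually resolves this:

      $ns = New-Object System.Xml.XmlNamespaceManager($serviceconfig.NameTable)
      $ns.AddNamespace("sc", "http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration")

      $role = $serviceconfig.SelectSingleNode("//sc:Role[@name='MyService_WorkerRole']", $ns)
      $setting = $role.SelectSingleNode("sc:ConfigurationSettings/sc:Setting[@name='DataConnectionString']", $ns)
      $setting.SetAttribute("value", "new really long string")
      $serviceconfig.Save("ServiceConfiguration.cscfg")   # output path is just an example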

    Read the article

  • Access DB Transaction Insert limit

    - by user986363
    Is there a limit to the number of inserts you can do within an Access transaction before you need to commit, or before Access/Jet throws an error? I'm currently running the following code in the hope of determining what this maximum is.

      OleDbConnection cn = new OleDbConnection(
          @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\temp\myAccessFile.accdb;Persist Security Info=False;");
      try
      {
          cn.Open();
          oleCommand = new OleDbCommand("BEGIN TRANSACTION", cn);
          oleCommand.ExecuteNonQuery();
          oleCommand.CommandText = "insert into [table1] (name) values ('1000000000001000000000000010000000000000')";
          for (i = 0; i < 25000000; i++)
          {
              oleCommand.ExecuteNonQuery();
          }
          oleCommand.CommandText = "COMMIT";
          oleCommand.ExecuteNonQuery();
      }
      catch (Exception ex)
      {
      }
      finally
      {
          try
          {
              oleCommand.CommandText = "COMMIT";
              oleCommand.ExecuteNonQuery();
          }
          catch {}
          if (cn.State != ConnectionState.Closed)
          {
              cn.Close();
          }
      }

    The error I received on a production application when I reached 2,333,920 inserts in a single uncommitted transaction was: "File sharing lock count exceeded. Increase MaxLocksPerFile registry entry". Disabling transactions fixed this problem.
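    If a single huge transaction keeps hitting the lock limit, one alternative worth considering (a sketch only; the batch size and connectionString are placeholders) is to commit in batches with OleDbTransaction, so that no single transaction ever holds millions of row locks:

      using (var cn = new OleDbConnection(connectionString))
      {
          cn.Open();
          OleDbTransaction tx = cn.BeginTransaction();
          var cmd = new OleDbCommand(
              "insert into [table1] (name) values ('1000000000001000000000000010000000000000')",
              cn, tx);
          const int batchSize = 10000;   // arbitrary; tune to stay well under the lock limit
          for (int i = 0; i < 25000000; i++)
          {
              cmd.ExecuteNonQuery();
              if ((i + 1) % batchSize == 0)
              {
                  tx.Commit();                   // releases the locks held so far
                  tx = cn.BeginTransaction();
                  cmd.Transaction = tx;
              }
          }
          tx.Commit();
      }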

    Read the article

  • Finding the Column Index for a Specific Value

    - by Btibert3
    Hi All, I am having a brain cramp. Below is a toy dataset:

      df <- data.frame(
        id = 1:6,
        v1 = c("a", "a", "c", NA, "g", "h"),
        v2 = c("z", "y", "a", NA, "a", "g"),
        stringsAsFactors=F)

    I have a specific value that I want to find across a set of defined columns, and I want to identify the position it is located in. The fields I am searching are characters, and the trick is that the value I am looking for might not exist. In addition, null strings are also present in the dataset. Assuming I knew how to do this, the variable position indicates the values I would like returned:

      > df
        id   v1   v2 position
      1  1    a    z        1
      2  2    a    y        1
      3  3    c    a        2
      4  4 <NA> <NA>       99
      5  5    g    a        2
      6  6    h    g       99

    The general rule is that I want to find the position of the value "a", and if it is not located or if v1 is missing, then I want 99 returned. In this instance I am searching across v1 and v2, but in reality I have 10 different variables. It is also worth noting that the value I am searching for can only exist once across the 10 variables. What is the best way to generate this recode? Many thanks in advance.
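    One possible recode (a sketch; the column names match the toy data, and the real code would list all 10 variables):

      # Look for "a" across the chosen columns row by row; match() returns NA when the
      # value is absent or the row is all NA, which is then mapped to 99.
      df$position <- apply(df[, c("v1", "v2")], 1, function(x) {
        p <- match("a", x)
        if (is.na(p)) 99 else p
      })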

    Read the article

  • QueryHistory against a codeplex project hangs indefinitely

    - by Robaticus
    I'm working on a TFS utility that gets the changesets for a particular project in TFS. I've got a home TFS 2010 server which I primarily use for testing, but I decided to give it a try against a CodePlex project to which I contribute. That way, I can test functionality against a larger number of changesets than I have locally.

    While it works fine in my environment, heading out over the wire to CodePlex has left me stumped. My application queries the history, but then, when trying to iterate through the history (which is when it lazy-loads the IEnumerable), my application hangs. Looking at IntelliTrace, I see a couple of "first chance" exceptions saying that the "item doesn't exist at the specified version" -- which is patently not true, as I'm trying to get history for "$/" at VersionSpec.Latest. I also see two or three consecutive server 500 errors being returned to me after forcing debugging to pause. Other operations (like GetItems()) work fine, so I'm pretty sure authentication isn't an issue. Any thoughts? Here's the code:

      IEnumerable items = vcs.QueryHistory("$/", VersionSpec.Latest, 1, RecursionType.None,
                                           null, null, null, 5, true, false);
      List<ChangesetItem> returnList = new List<ChangesetItem>();
      foreach (Changeset cs in items) // hangs here on first iteration
      {
          ChangesetItem newItem = new ChangesetItem()
          {
              ChangesetId = cs.ChangesetId,
              //ChangesetNote = cs.CheckinNote.Values[0].Value,
              Comment = cs.Comment,
              Committer = cs.Committer,
              CreationDate = cs.CreationDate
          };
          returnList.Add(newItem);
      }

    Read the article

  • Bootstrap - Typeahead on multiple inputs

    - by Clem
    I have two text inputs, and both have to run an autocompletion. The site is using Bootstrap and the « typeahead » component. I have this HTML:

      <input type="text" class="js_typeahead" data-role="artist" />
      <input type="text" class="js_typeahead" data-role="location" />

    I'm using the « data-role » attribute (which is sent to the Ajax controller as a $_POST index) in order to determine what kind of data has to be retrieved from the database. The JavaScript goes this way:

      var myTypeahead = $('input.js_typeahead').typeahead({
          source: function (query, process) {
              var data_role;
              data_role = myTypeahead.attr('data-role');
              return $.post('/ajax/typeahead', { query: query, data_role: data_role }, function (data) {
                  return process(data.options);
              });
          }
      });

    With PHP, I check what $_POST['data_role'] contains and run the MySQL query (in this case, a query either on a list of Artists or a list of Locations). But the problem is that the second typeahead returns the same values as the first one (the list of Artists). I assume it's because the listener is attached to the object « myTypeahead », and this way the "data-role" attribute which is used will always be the same. I think I could fix it by using something like:

      data_role = $(this).attr('data-role');

    But of course this doesn't work, as it's a different scope. Maybe I'm doing it all wrong, but at least maybe you people could give me a hint. Sorry if this has already been discussed; I actually searched but without success. Thanks in advance, Clem (from France, sorry for my English)
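    One way to keep each input's own data-role in scope (a sketch, not tested against this exact markup) is to initialise the typeaheads one at a time with .each(), capturing the current input in a closure:

      $('input.js_typeahead').each(function () {
          var $input = $(this);   // the specific input this typeahead belongs to
          $input.typeahead({
              source: function (query, process) {
                  return $.post('/ajax/typeahead',
                      { query: query, data_role: $input.attr('data-role') },
                      function (data) { process(data.options); });
              }
          });
      });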

    Read the article

  • MATLAB: impoint getPosition strange behaviour

    - by tguclu
    I have a question about the values returned by getPosition. Below is my code. It lets the user set 10 points on a given image:

      figure, imshow(im);
      colorArray = ['y','m','c','r','g','b','w','k','y','m','c'];
      pointArray = cell(1,10);
      % Construct boundary constraint function
      fcn = makeConstrainToRectFcn('impoint', get(gca,'XLim'), get(gca,'YLim'));
      for i = 1:10
          p = impoint(gca);
          % Enforce boundary constraint function using setPositionConstraintFcn
          setPositionConstraintFcn(p, fcn);
          setColor(p, colorArray(1,i));
          pointArray{i} = p;
          getPosition(p)
      end

    When I start to set points on the image I get results like [675.000 538.000], which means that the x part of the coordinate is 675 and the y part is 538, right? This is what the MATLAB documentation says, but since the image is 576*120 (as displayed in the window) this is not logical. It seemed to me like, somehow, getPosition returns the y coordinate first. I need some clarification on this. Thanks for the help.

    Read the article

  • Tree data in MySql database table

    - by Robert Koritnik
    I have a table that uses the adjacency list model for hierarchy storage. My most relevant columns in this table are therefore:

      ItemId       -- is auto_increment
      ParentId
      Level
      ParentTrail  -- in the form of "parentId/../parentId/itemId"

    I then created a BEFORE INSERT trigger that populates the columns Level and ParentTrail. Since the last column also includes the current item's ID, I had to use a trick in my trigger, because auto_increment values are not available in a BEFORE INSERT trigger. So I get that value from the information_schema.tables table.

    All works fine, until I try to write an update trigger that would update my item and its descendants when the item changes its parent (ParentId has changed). But I can't make an update on my table inside the update trigger. All I can do is change the current record's values, but not other records'.

    I could use a separate table for the hierarchy data, but that would mean that I would also have to create a view that combines these two tables (a 1:1 relation), and I would like to avoid this if at all possible. Is there a way to keep all of this in the same table so that these fields (Level and ParentTrail) set/update themselves automagically using triggers?

    Read the article

  • floating point equality in Python and in general

    - by eric.frederich
    I have a piece of code that behaves differently depending on whether I go through a dictionary to get conversion factors or whether I use them directly. The following piece of code will print:

      1.0 == 1.0 -> False

    But if you replace factors[units_from] with 10.0 and factors[units_to] with 1.0 / 2.54 it will print:

      1.0 == 1.0 -> True

      #!/usr/bin/env python

      base = 'cm'
      factors = {
          'cm'        : 1.0,
          'mm'        : 10.0,
          'm'         : 0.01,
          'km'        : 1.0e-5,
          'in'        : 1.0 / 2.54,
          'ft'        : 1.0 / 2.54 / 12.0,
          'yd'        : 1.0 / 2.54 / 12.0 / 3.0,
          'mile'      : 1.0 / 2.54 / 12.0 / 5280,
          'lightyear' : 1.0 / 2.54 / 12.0 / 5280 / 5.87849981e12,
      }

      # convert 25.4 mm to inches
      val = 25.4
      units_from = 'mm'
      units_to = 'in'

      base_value = val / factors[units_from]
      ret = base_value * factors[units_to]

      print ret, '==', 1.0, '->', ret == 1.0

    Let me first say that I am pretty sure what is going on here. I have seen it before in C, just never in Python, but since Python is implemented in C we're seeing it. I know that floating point numbers will change values going from a CPU register to cache and back. I know that comparing what should be two equal variables will return false if one of them was paged out while the other stayed resident in a register.

    Questions: What is the best way to avoid problems like this, in Python or in general? Am I doing something completely wrong?

    Side note: This is obviously part of a stripped down example, but what I'm trying to do is come up with classes for length, volume, etc. that can compare against other objects of the same class but with different units.

    Rhetorical questions: If this is a potentially dangerous problem, since it makes programs behave in a nondeterministic manner, should compilers warn or error when they detect that you're checking equality of floats? Should compilers support an option to replace all float equality checks with a 'close enough' function? Do compilers already do this and I just can't find the information?
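    For the "what is the best way" part, the usual remedy (a sketch; the tolerances are arbitrary, and math.isclose needs Python 3.5+, while the explicit formula works everywhere) is to compare floats with a tolerance instead of ==:

      import math

      def nearly_equal(a, b, rel_tol=1e-9, abs_tol=0.0):
          """Tolerance-based float comparison, the same idea as math.isclose."""
          return abs(a - b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol)

      ret = 25.4 / 10.0 * (1.0 / 2.54)
      print(nearly_equal(ret, 1.0))   # True whether or not ret == 1.0 happens to hold
      print(math.isclose(ret, 1.0))   # same idea, using the standard library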

    Read the article

  • Reversing a circular deque without a sentinel

    - by SDLFunTimes
    Hey Stackoverflow, I'm working on my homework and I'm trying to reverse a circular-linked deque without a sentinel. Here are my data structures:

      struct DLink {
          TYPE value;
          struct DLink * next;
          struct DLink * prev;
      };

      struct cirListDeque {
          int size;
          struct DLink *back;
      };

    Here's my approach to reversing the deque:

      void reverseCirListDeque(struct cirListDeque* q)
      {
          struct DLink* current;
          struct DLink* temp;

          temp = q->back->next;
          q->back->next = q->back->prev;
          q->back->prev = temp;

          current = q->back->next;
          while (current != q->back->next) {
              temp = current->next;
              current->next = current->prev;
              current->prev = temp;
              current = current->next;
          }
      }

    However, when I run it and put the values 1, 2 and 3 on it (TYPE is just an alias for int in this case) and reverse it, I get 2, 3, null. Does anyone have any ideas as to what I may be doing wrong? Thanks in advance.
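    One thing that stands out: current is set to q->back->next immediately before the while test, so the condition current != q->back->next is false on the very first check and the loop body never runs; only the back node's pointers get swapped. A possible rewrite is sketched below (it assumes the deque is non-empty and keeps q->back pointing at the same node, which may or may not match the intended semantics of the other deque operations):

      void reverseCirListDeque(struct cirListDeque *q)
      {
          struct DLink *current = q->back;
          struct DLink *temp;

          do {
              temp = current->next;            /* remember the old forward link */
              current->next = current->prev;   /* swap the two pointers */
              current->prev = temp;
              current = temp;                  /* advance along the original direction */
          } while (current != q->back);        /* stop after one full lap */
      }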

    Read the article

  • Setting Class-Level Variable to Use Between Event Handlers

    - by lush
    I'm having a hard time understanding why the following code doesn't work. I'm sure it's something remedial that I'm missing or not understanding. I currently have a page that asks for user input. If, based on the input and the logged-in user, I find data from this page already in the database, I need to update the existing records rather than creating new ones, so I set a class-level bool to true. The problem is, when MyNextButton is clicked, PreviouslySubmitted is still false. So, I'm not sure how to make the value of this variable persist. Any advice is appreciated, thanks.

      public partial class MyForm : System.Web.UI.Page
      {
          private bool previouslySubmitted;

          protected void Page_Load(object sender, EventArgs e)
          {
              MyButton.Click += (o, i) =>
              {
                  q = from a in db.TableA
                      where (a.SomeField == SomeValue)
                      select a;

                  if (q.Any())
                  {
                      PreviouslySubmitted = true;
                      // populate the form's fields with values from the database for the user to revise
                  }
              };

              MyNextButton.Click += (o, i) =>
              {
                  if (PreviouslySubmitted)
                  {
                      // update database
                  }
                  else
                  {
                      // insert into database
                  }
              };
          }
      }
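    The underlying issue is that a Web Forms page instance is recreated on every postback, so an ordinary field resets to false between requests. One way to make the flag survive the round trip (a sketch; Session or a hidden field would work along the same lines) is to back the property with ViewState:

      private bool PreviouslySubmitted
      {
          get { return ViewState["PreviouslySubmitted"] as bool? ?? false; }
          set { ViewState["PreviouslySubmitted"] = value; }
      }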

    Read the article

  • How to deserialize implementation classes in OSGi

    - by Daniel Schneller
    In an eRCP OSGi based application the user can push a button and go to a lock screen similar to that of Windows or Mac OS X. When this happens, the current state of the application is serialized to a file and control is handed over to the lock screen. In this mobile application memory is very tight, so we need to get rid of the original view/controller when the lock screen comes up. This works fine and we end up with a binary serialized file. Once the user logs back in, the file is read in again and the original state of the application restored.

    This works fine as well, except when the controller that was serialized contained a reference to an object which comes from a different bundle. In my concrete case the original controller (from bundle A) can call a web service and gets a result back. Nothing fancy, just some Strings and Numbers in a simple value holder class. However the controller only sees this as a Result interface; the actual runtime object (ResultImpl) is defined and created in a different bundle (bundle B, the webservice client implementation) and returned via a service call.

    When the deserialization now tries to thaw the controller from the file, it throws a ClassNotFound exception, complaining about not being able to deserialize the result object, because deserialization is called from bundle A, which cannot see the ResultImpl class from bundle B. Any ideas on how to work around that? The only thing I could come up with is to clone all the individual values into another object, defined in the controller's bundle, but this seems like quite a hassle.
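    One common workaround (a sketch; it assumes you can obtain a ClassLoader that can see bundle B's classes, for example via getClass().getClassLoader() on a service object obtained from that bundle) is to deserialize with an ObjectInputStream whose class resolution is not limited to bundle A's visibility:

      import java.io.IOException;
      import java.io.InputStream;
      import java.io.ObjectInputStream;
      import java.io.ObjectStreamClass;

      /** Resolves classes through a supplied ClassLoader first, falling back to the default behaviour. */
      class BundleAwareObjectInputStream extends ObjectInputStream {
          private final ClassLoader loader;

          BundleAwareObjectInputStream(InputStream in, ClassLoader loader) throws IOException {
              super(in);
              this.loader = loader;
          }

          @Override
          protected Class<?> resolveClass(ObjectStreamClass desc) throws IOException, ClassNotFoundException {
              try {
                  return Class.forName(desc.getName(), false, loader);
              } catch (ClassNotFoundException e) {
                  return super.resolveClass(desc);   // fall back to the caller's visibility
              }
          }
      }

    The declarative alternative is DynamicImport-Package in bundle A's manifest, at the cost of weakening the bundle boundary.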

    Read the article

  • Building Stored Procedure to group data into ranges with roughly equal results in each bucket

    - by Len
    I am trying to build one procedure that takes a large amount of data and creates 5 range buckets to display the data. The bucket ranges will have to be set according to the results. Here is my existing SP:

      GO
      /****** Object: StoredProcedure [dbo].[sp_GetRangeCounts] Script Date: 03/28/2010 19:50:45 ******/
      SET ANSI_NULLS ON
      GO
      SET QUOTED_IDENTIFIER ON
      GO
      ALTER PROCEDURE [dbo].[sp_GetRangeCounts]
      @idMenu int
      AS
      declare
          @myMin decimal(19,2), @myMax decimal(19,2), @myDif decimal(19,2),
          @range1 decimal(19,2), @range2 decimal(19,2), @range3 decimal(19,2),
          @range4 decimal(19,2), @range5 decimal(19,2), @range6 decimal(19,2)

      SELECT @myMin=Min(modelpropvalue), @myMax=Max(modelpropvalue)
      FROM xmodelpropertyvalues
      WHERE modelPropUnitDescriptionID=@idMenu

      set @myDif=(@myMax-@myMin)/5

      set @range1=@myMin
      set @range2=@myMin+@myDif
      set @range3=@range2+@myDif
      set @range4=@range3+@myDif
      set @range5=@range4+@myDif
      set @range6=@range5+@myDif

      select @myMin,@myMax,@myDif,@range1,@range2,@range3,@range4,@range5,@range6

      select t.range as myRange, count(*) as myCount
      from (
          select case
              when modelpropvalue between @range1 and @range2 then 'range1'
              when modelpropvalue between @range2 and @range3 then 'range2'
              when modelpropvalue between @range3 and @range4 then 'range3'
              when modelpropvalue between @range4 and @range5 then 'range4'
              when modelpropvalue between @range5 and @range6 then 'range5'
          end as range
          from xmodelpropertyvalues
          where modelpropunitDescriptionID=@idMenu) t
      group by t.range
      order by t.range

    This calculates the min and max value from my table, works out the difference between the two and creates 5 buckets. The problem is that if there is a small number of very high (or very low) values then the buckets will appear very distorted, as in these results:

      range1 2806
      range2 296
      range3 75
      range5 1

    Basically I want to rebuild the SP so it creates buckets with equal numbers of results in each. I have played around with some of the following approaches without quite nailing it:

      SELECT modelpropvalue, NTILE(5) OVER (ORDER BY modelpropvalue)
      FROM xmodelpropertyvalues

    This creates a new column with either 1, 2, 3, 4 or 5 in it. I also tried expressions like:

      ROW_NUMBER() OVER (ORDER BY modelpropvalue) between @range1 and @range2
      ROW_NUMBER() OVER (ORDER BY modelpropvalue) between @range2 and @range3

    Or maybe I could allocate every record a row number and then divide into ranges from this?
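    Building on the NTILE idea, one way to get five roughly equally populated buckets and report each bucket's boundaries and count (a sketch against the same table and parameter) is:

      SELECT bucket,
             MIN(modelpropvalue) AS rangeFrom,
             MAX(modelpropvalue) AS rangeTo,
             COUNT(*)            AS myCount
      FROM (
          SELECT modelpropvalue,
                 NTILE(5) OVER (ORDER BY modelpropvalue) AS bucket
          FROM xmodelpropertyvalues
          WHERE modelPropUnitDescriptionID = @idMenu
      ) t
      GROUP BY bucket
      ORDER BY bucket

    NTILE assigns near-equal row counts to each bucket, so the ranges adapt automatically to skewed data instead of being fixed fifths of the min-max span.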

    Read the article

  • database structure

    - by jindalsyogesh
    I have a table named ActivityRecording. This table currently has 500,000 records. I need to add a lot of new inputs that relate to the ActivityRecording table. The relation of ActivityRecording to these new input fields is 1 to 0..1.

    What's going to happen on screen is that when the user fills in the ActivityRecording data, he will then be taken to a new page, and this page will show a form based on the user's input (from a dropdown named Service) in ActivityRecording. There will be 6 different kinds of form (each form will have 7-8 inputs, which include textareas of size 5 KB, textboxes and checkboxes). So, for one ActivityRecording the user will fill in one of the 6 forms.

    There are two ways I know of (there could be more) to design the data structure:

    1. Add all the inputs from all 6 forms into the ActivityRecording table. So, columns belonging to 5 of these forms will be null in this table; only the columns belonging to one of the forms will have values.

    2. Add 6 new tables (one for each form) and add 6 foreign key columns to ActivityRecording. So, out of the 6 foreign keys, 5 will be null and one will actually point to a table.

    Which approach is a better data structure design? Please take into consideration that the number of rows in this table is 500,000 and is expected to grow at a faster rate now.

    Read the article

  • C# how to calculate hashcode from an object reference.

    - by Wayne
    Folks, here's a thorny problem for you! A part of the TickZoom system must collect instances of every type of object into a Dictionary. It is imperative that their equality and hash code be based on the instance of the object, which means reference equality instead of value equality.

    The challenge is that some of the objects in the system have overridden Equals() and GetHashCode() for use as value equality, and their internal values will change over time. That means that their Equals and GetHashCode are useless. How do we solve this generically rather than intrusively?

    So far, we created a struct called ObjectHandle to wrap each object for hashing into the Dictionary. As you see below, we implemented Equals(), but the problem of how to calculate a hash code remains.

      public struct ObjectHandle : IEquatable<ObjectHandle>
      {
          public object Object;

          public bool Equals(ObjectHandle other)
          {
              return object.ReferenceEquals(this.Object, other.Object);
          }
      }

    See? There is the method object.ReferenceEquals(), which will compare reference equality without regard for any overridden Equals() implementation in the object. Now, how do we calculate a matching GetHashCode() by only considering the reference, without concern for any overridden GetHashCode() method? Ahh, I hope this gives you an interesting puzzle. We're stuck over here. Sincerely, Wayne
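    One candidate worth looking at (a sketch, added to the same struct): System.Runtime.CompilerServices.RuntimeHelpers.GetHashCode(object) returns the identity-based hash code and ignores any GetHashCode override, which pairs naturally with object.ReferenceEquals:

      public override int GetHashCode()
      {
          // Identity hash: ignores any overridden GetHashCode on the wrapped object.
          return System.Runtime.CompilerServices.RuntimeHelpers.GetHashCode(this.Object);
      }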

    Read the article

  • Determining what action an NPC will take, when it is partially random but influenced by preferences?

    - by lala
    I want to make characters in a game perform actions that are partially random but also influenced by preferences. For instance, if a character feels angry they have a higher chance of yelling than telling a joke. So I'm thinking about how to determine which action the character will take. Here are the ideas that have come to me.

    Solution #1: Iterate over every possible action. For each action do a random roll, then add the preference value to that random number. The action with the highest value is the one the character takes.

    Solution #2: Assign a range of numbers to an action, with more likely actions having a wider range. So, if the random roll returns anywhere from 1-5, the character will tell a joke. If it returns 6-75, they will yell. And so on.

    Solution #3: Group all the actions and make a branching tree. Will they take a friendly action or a hostile action? The random roll (with preference values added) says hostile. Will they make a physical attack or verbal? The random roll says verbal. Keep going down the line until you reach the action.

    Solution #1 is the simplest, but hardly efficient. I think Solution #3 is a little more complicated, but isn't it more efficient? Does anyone have any more insight into this particular problem? Is #3 the best solution? Is there a better solution?
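    For what it's worth, Solution #2 is essentially a weighted random choice, which stays simple even once preferences adjust the weights. A minimal sketch (the action names and weights are made up for illustration):

      import random

      def choose_action(weights):
          """Pick an action with probability proportional to its preference-adjusted weight.

          weights: dict mapping action name -> positive weight, e.g. {"yell": 8, "tell_joke": 2}.
          """
          total = sum(weights.values())
          roll = random.uniform(0, total)
          cumulative = 0.0
          for action, weight in weights.items():
              cumulative += weight
              if roll <= cumulative:
                  return action
          return action  # fallback for floating-point edge cases

      print(choose_action({"yell": 8, "tell_joke": 2}))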

    Read the article

  • codegen:nullValue vs msprop:nullValue

    - by Ken
    OK, I have datasets that I created way back in the 1.1 framework, in which we used codegen:nullValue within the XSD to handle null values. However, if I open one of these datasets with VS 2005 (i.e. the 2.0 framework) and add a column, it removes the codegen setting from the entire XSD but adds in msprop:nullValue.

    However, unlike previous years, I noticed this time that the generated property code was NOT overridden from returning the null value specified in codegen as it was doing in the past. Meaning the msprop appears to be creating the proper code behind the scenes (see example). Anyone know of any other differences? Should I be concerned with deploying a new XSD WITHOUT the codegen code but instead with the msprop XML?

    Example. The original creates:

      Public Property ParentID() As Integer
          Get
              If Me.IsParentIDNull Then
                  Return -1
              Else
                  Return CType(Me(Me.tableCompany.ParentIDColumn), Integer)
              End If
          End Get
          Set
              Me(Me.tableCompany.ParentIDColumn) = value
          End Set
      End Property

    The new one creates:

      Public Property ParentID() As Integer
          Get
              If Me.IsParentIDNull Then
                  Return -1
              Else
                  Return CType(Me(Me.tableCompany.ParentIDColumn), Integer)
              End If
          End Get
          Set
              Me(Me.tableCompany.ParentIDColumn) = value
          End Set
      End Property

    BUT is there anything else that might be occurring that I am NOT seeing, thus MAKING me re-enter all the codegen settings? THANKS!

    Read the article

  • Best practice to persist a class model

    - by Yaron Naveh
    My application contains a set of model classes, e.g. Person, Department... The user changes values for instances of these classes in the UI, and the classes are persisted to my "project" file. Next time the user can open and edit the project.

    The next version of my product may change the model classes drastically. It will still need to open existing project files (I will know how to handle missing data). How is it best to persist my model classes to the project file?

    The easiest way to persist classes is data contract serialization. However, it will fail on breaking changes (I expect to have such). How to handle this?

    - Use some other persistence, e.g. a name-value collection or a db, which is more tolerant.
    - Ship a "project converter" application to migrate old projects. This requires either shipping with both the old and new models, or manipulating XML.

    Which is best?

    Read the article

  • Setting up relations/mappings for a SQLAlchemy many-to-many database

    - by Brent Ramerth
    I'm new to SQLAlchemy and relational databases, and I'm trying to set up a model for an annotated lexicon. I want to support an arbitrary number of key-value annotations for the words, which can be added or removed at runtime. Since there will be a lot of repetition in the names of the keys, I don't want to use this solution directly, although the code is similar.

    My design has word objects and property objects. The words and properties are stored in separate tables, with a property_values table that links the two. Here's the code:

      from sqlalchemy import Column, Integer, String, Table, create_engine
      from sqlalchemy import MetaData, ForeignKey
      from sqlalchemy.orm import relation, mapper, sessionmaker
      from sqlalchemy.ext.declarative import declarative_base

      engine = create_engine('sqlite:///test.db', echo=True)
      meta = MetaData(bind=engine)

      property_values = Table('property_values', meta,
          Column('word_id', Integer, ForeignKey('words.id')),
          Column('property_id', Integer, ForeignKey('properties.id')),
          Column('value', String(20))
      )

      words = Table('words', meta,
          Column('id', Integer, primary_key=True),
          Column('name', String(20)),
          Column('freq', Integer)
      )

      properties = Table('properties', meta,
          Column('id', Integer, primary_key=True),
          Column('name', String(20), nullable=False, unique=True)
      )

      meta.create_all()

      class Word(object):
          def __init__(self, name, freq=1):
              self.name = name
              self.freq = freq

      class Property(object):
          def __init__(self, name):
              self.name = name

      mapper(Property, properties)

    Now I'd like to be able to do the following:

      Session = sessionmaker(bind=engine)
      s = Session()
      word = Word('foo', 42)
      word['bar'] = 'yes'  # or word.bar = 'yes' ?
      s.add(word)
      s.commit()

    Ideally this should add 1|foo|42 to the words table, add 1|bar to the properties table, and add 1|1|yes to the property_values table. However, I don't have the right mappings and relations in place to make this happen. I get the sense from reading the documentation at http://www.sqlalchemy.org/docs/05/mappers.html#association-pattern that I want to use an association proxy or something of that sort here, but the syntax is unclear to me. I experimented with this:

      mapper(Word, words, properties={
          'properties': relation(Property, secondary=property_values)
      })

    but this mapper only fills in the foreign key values, and I need to fill in the other value as well. Any assistance would be greatly appreciated.

    Read the article

  • Accessing MS Access database from C#

    - by Abilash
    I want to use MS Access as the database for my C# Windows Forms application. I have used the OleDb driver for connecting to MS Access. I am able to select records from MS Access using OleDbConnection and ExecuteReader, but I am unable to insert, update and delete records. My code is as follows:

      OleDbConnection con = new OleDbConnection(strCon);
      try
      {
          con.Open();
          OleDbCommand com = new OleDbCommand("INSERT INTO DPMaster(DPID,DPName,ClientID,ClientName) VALUES('53','we','41','aw')", con);
          int a = com.ExecuteNonQuery();

          //OleDbCommand com = new OleDbCommand("SELECT * FROM DPMaster", con);
          //OleDbDataReader dr = com.ExecuteReader();
          //while (dr.Read())
          //{
          //    MessageBox.Show(dr[2].ToString());
          //}

          MessageBox.Show(a.ToString());
      }
      catch
      {
          MessageBox.Show("cannot");
      }

    If I execute the commented block the application works, but the insert block does not. Why am I unable to insert/update/delete records in the database? My connection string is as follows:

      string strCon = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=xyz.mdb;Persist Security Info=True";

    Read the article

  • How do I mock a method with an open array parameter in PascalMock?

    - by Oliver Giesen
    I'm currently in the process of getting started with unit testing and mocking for good, and I stumbled over the following method that I can't seem to fabricate a working mock implementation for:

      function GetInstance(const AIID: TGUID; out AInstance;
        const AArgs: array of const;
        const AContextID: TImplContextID = CID_DEFAULT): Boolean;

    (TImplContextID is just an alias for Integer.) I thought it would have to look something like this:

      function TImplementationProviderMock.GetInstance(
        const AIID: TGUID; out AInstance;
        const AArgs: array of const;
        const AContextID: TImplContextID): Boolean;
      begin
        Result := AddCall('GetInstance')
          .WithParams([@AIID, AContextID])
          .ReturnsOutParams([AInstance])
          .ReturnValue;
      end;

    But the compiler complains about the .ReturnsOutParams([AInstance]), saying "Bad argument type in variable type array constructor.". Also, I haven't found a way to specify the open array parameter AArgs at all. And is using the @-notation for the TGUID-typed parameter the right way to go? Is it possible to mock this method with the current version of PascalMock at all?

    Update: I now realize I got the purpose of ReturnsOutParams completely wrong: it's intended to be used for populating the values to be returned when defining the expectations, rather than for mocking the call itself. I now think the correct syntax for mocking the out parameter would probably have to look more like this:

      function TImplementationProviderMock.GetInstance(
        const AIID: TGUID; out AInstance;
        const AArgs: array of const;
        const AContextID: TImplContextID): Boolean;
      var
        lCall: TMockMethod;
      begin
        lCall := AddCall('GetInstance').WithParams([@AIID, AContextID]);
        Pointer(AInstance) := lCall.OutParams[0];
        Result := lCall.ReturnValue;
      end;

    The questions that remain are how to mock the open array parameter AArgs, and whether passing the TGUID argument (i.e. a value type) by address will work out...

    Read the article

  • ASP.NET: ModalPopupExtender prevents button click event from firing

    - by C. Griffin
    Here is what I'm trying to do: click a button on my page, which in turn makes two things happen:

    1. Display a ModalPopup to prevent the user from pressing any buttons or changing values
    2. Call my code-behind method, hiding the ModalPopup when finished

    Here is the ASP markup:

      <asp:UpdatePanel ID="UpdatePanel2" runat="server" ChildrenAsTriggers="true" UpdateMode="Always">
        <Triggers>
          <asp:AsyncPostBackTrigger ControlID="btnSaveData" EventName="Click" />
        </Triggers>
        <ContentTemplate>
          <asp:Panel ID="pnlHidden" runat="server" style="display: none;">
            <div>
              <h1>Saving...</h1>
            </div>
          </asp:Panel>
          <cc1:ModalPopupExtender ID="modalPopup" BackgroundCssClass="modalBackground" runat="server"
              TargetControlID="btnSaveData" PopupControlID="pnlHidden">
          </cc1:ModalPopupExtender>
          <asp:Button ID="btnSaveData" runat="server" Text="Save Data" OnClick="btnSaveData_Click" />
        </ContentTemplate>
      </asp:UpdatePanel>

    Now, here is my code-behind C# code:

      protected void btnSaveData_Click(object sender, EventArgs e)
      {
          UpdateUserData(GetLoggedInUser());
          modalPopup.Enabled = false;
      }

    Why doesn't this work? The ModalPopup displays perfectly, but the btnSaveData_Click event NEVER fires.
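    A common explanation, and a sketch of one workaround (not a verified fix for this exact page): when TargetControlID points at btnSaveData, the extender cancels that button's client-side postback and only shows the popup, so the server-side Click never runs. Pointing the extender at a dummy, invisible target and showing the popup explicitly from the real button lets the postback go through:

      <asp:Button ID="btnDummyTarget" runat="server" Style="display:none" />
      <cc1:ModalPopupExtender ID="modalPopup" runat="server" BehaviorID="modalPopupBehavior"
          TargetControlID="btnDummyTarget" PopupControlID="pnlHidden"
          BackgroundCssClass="modalBackground" />
      <asp:Button ID="btnSaveData" runat="server" Text="Save Data"
          OnClientClick="$find('modalPopupBehavior').show();"
          OnClick="btnSaveData_Click" />

    In the click handler, calling modalPopup.Hide() (rather than setting Enabled = false) then dismisses the popup once the work completes.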

    Read the article

  • Lookup table size reduction

    - by Ryan
    Hello: I have an application in which I have to store a couple of million integers in a lookup table. Obviously I cannot store such an amount of data in memory, and in my requirements I am very limited: I have to store the data in an embedded system, so I am very limited in space. I would therefore like to ask about recommended methods for reducing the size of the lookup table.

    I cannot use function approximation such as neural networks; the values need to be in a table. The range of the integers is not known at the moment. When I say integers I mean 32-bit values.

    Basically the idea is to use some compression method to reduce the amount of memory, but without losing much precision. This thing needs to run in hardware, so the computation overhead cannot be very high.

    In my algorithm I have to access one value of the table, do some operations with it and then update the value. In the end what I should have is a function to which I pass an index and get a value back, and another function to write a value into the table.

    I found one method called tile coding, http://www.cs.ualberta.ca/~sutton/book/8/node6.html, which is based on several lookup tables. Does anyone know any other method? Thanks.

    Read the article

  • [grails] setting cookies when render type is "contentType: text/json"

    - by Robin Jamieson
    Is it possible to set cookies on the response when the return render type is set as JSON? I can set cookies on the response object when returning with a standard render type and, later on, I'm able to get them back on the subsequent request. However, if I set the cookies while rendering the return values as JSON, I can't seem to get the cookie back on the next request object. What's happening here?

    These two actions work as expected, with 'basicForm' performing a regular form post to the action 'withRegularSubmit' when the user clicks submit:

      // first action sets the cookie and second action yields the originally set cookie
      def regularAction = {
          // using cookie plugin
          response.setCookie("username-regular", "regularCookieUser123", 604800);
          return render(view: "basicForm");
      }

      // called by form post
      def withRegularSubmit = {
          def myCookie = request.getCookie("username-regular"); // returns the value 'regularCookieUser123'
          return render(view: "resultView");
      }

    When I switch to setting the cookie just before returning from the response with JSON, I don't get the cookie back with the post. The request starts by getting an HTML document that contains a form, and when the document load event is fired, the following request is invoked via JavaScript with jQuery like this:

      var someUrl = "http://localhost/jsonAction";
      $.get(someUrl, function(jsonData) {
          // do some work with javascript
      });

    The controller work:

      // this action is called initially and returns an html doc with a form.
      def loadJsonForm = {
          return render(view: "jsonForm");
      }

      // called via javascript when the document load event is fired
      def jsonAction = {
          response.setCookie("username-json", "jsonCookieUser456", 604800); // using cookie plugin
          return render(contentType: 'text/json') {
              'pair'('myKey': "someValue")
          };
      }

      // called by form post
      def withJsonSubmit = {
          def myCookie = request.getCookie("username-json"); // got null value, expecting: jsonCookieUser456
          return render(view: "resultView");
      }

    The data is returned to the server as a result of the user pressing the 'submit' button and not through a script. Prior to the submit of both 'withRegularSubmit' and 'withJsonSubmit', I see the cookies stored in the browser (Firefox), so I know they reached the client.

    Read the article

  • Trouble with a deprecated constructor in Visual Basic (Visual Studio 2010)

    - by VBPRIML
    My goal is to print labels with barcodes and a date stamp from an entry to a Zebra TLP 2844 when the user clicks the OK button / hits Enter. I found what I think might be the code for this on Zebra's site and have been integrating it into my program, but part of it is deprecated and I can't quite figure out how to update it. Below is what I have so far. The printer is attached via USB, and the program will also store the entered numbers in a database, but I have that part done. Any help would be greatly appreciated.

      Public Class ScanForm
          Inherits System.Windows.Forms.Form

          Public Const GENERIC_WRITE = &H40000000
          Public Const OPEN_EXISTING = 3
          Public Const FILE_SHARE_WRITE = &H2

          Dim LPTPORT As String
          Dim hPort As Integer

          Public Declare Function CreateFile Lib "kernel32" Alias "CreateFileA" (ByVal lpFileName As String,
              ByVal dwDesiredAccess As Integer,
              ByVal dwShareMode As Integer, <MarshalAs(UnmanagedType.Struct)> ByRef lpSecurityAttributes As SECURITY_ATTRIBUTES,
              ByVal dwCreationDisposition As Integer, ByVal dwFlagsAndAttributes As Integer,
              ByVal hTemplateFile As Integer) As Integer

          Public Declare Function CloseHandle Lib "kernel32" Alias "CloseHandle" (ByVal hObject As Integer) As Integer

          Dim retval As Integer

          <StructLayout(LayoutKind.Sequential)> Public Structure SECURITY_ATTRIBUTES
              Private nLength As Integer
              Private lpSecurityDescriptor As Integer
              Private bInheritHandle As Integer
          End Structure

          Private Sub OKButton_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles OKButton.Click

              Dim TrNum
              Dim TrDate
              Dim SA As SECURITY_ATTRIBUTES
              Dim outFile As FileStream, hPortP As IntPtr

              LPTPORT = "USB001"
              TrNum = Me.ScannedBarcodeText.Text()
              TrDate = Now()

              hPort = CreateFile(LPTPORT, GENERIC_WRITE, FILE_SHARE_WRITE, SA, OPEN_EXISTING, 0, 0)

              hPortP = New IntPtr(hPort) 'convert Integer to IntPtr
              outFile = New FileStream(hPortP, FileAccess.Write) 'Create FileStream using Handle
              Dim fileWriter As New StreamWriter(outFile)

              fileWriter.WriteLine(" ")
              fileWriter.WriteLine("N")
              fileWriter.Write("A50,50,0,4,1,1,N,")
              fileWriter.Write(Chr(34))
              fileWriter.Write(TrNum) 'prints the tracking number variable
              fileWriter.Write(Chr(34))
              fileWriter.Write(Chr(13))
              fileWriter.Write(Chr(10))
              fileWriter.Write("A50,100,0,4,1,1,N,")
              fileWriter.Write(Chr(34))
              fileWriter.Write(TrDate) 'prints the date variable
              fileWriter.Write(Chr(34))
              fileWriter.Write(Chr(13))
              fileWriter.Write(Chr(10))
              fileWriter.WriteLine("P1")
              fileWriter.Flush()
              fileWriter.Close()
              outFile.Close()
              retval = CloseHandle(hPort)

              'Add entry to database
              Using connection As New SqlClient.SqlConnection("Data Source=MNGD-LABS-APP02;Initial Catalog=ScannedDB;Integrated Security=True;Pooling=False;Encrypt=False"), _
                  cmd As New SqlClient.SqlCommand("INSERT INTO [ScannedDBTable] (TrackingNumber, Date) VALUES (@TrackingNumber, @Date)", connection)

                  cmd.Parameters.Add("@TrackingNumber", SqlDbType.VarChar, 50).Value = TrNum
                  cmd.Parameters.Add("@Date", SqlDbType.DateTime, 8).Value = TrDate
                  connection.Open()
                  cmd.ExecuteNonQuery()
                  connection.Close()
              End Using

              'Prepare data for next entry
              ScannedBarcodeText.Clear()
              Me.ScannedBarcodeText.Focus()

          End Sub

      End Class
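    On the deprecation itself: the FileStream(IntPtr, FileAccess) constructor is the obsolete piece here. One way around it (a sketch; the rest of the code stays unchanged) is to wrap the raw Win32 handle in a SafeFileHandle and pass that to FileStream instead:

      Imports Microsoft.Win32.SafeHandles

      ' Replace the two lines that build outFile from the raw handle with:
      Dim safeHandle As New SafeFileHandle(New IntPtr(hPort), ownsHandle:=True)
      Dim outFile As New FileStream(safeHandle, FileAccess.Write)
      ' ownsHandle:=True lets the SafeFileHandle close the Win32 handle when the stream is
      ' closed, so the explicit CloseHandle(hPort) call at the end is no longer needed.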

    Read the article
