Search Results

Search found 3749 results on 150 pages for 'updating'.


  • importing symbols from python package into caller's namespace

    - by Paul C
    I have a little internal DSL written in a single Python file that has grown to the point where I would like to split the contents across a number of different directories and files. The new directory structure currently looks like this:

        dsl/
            __init__.py
            types/
                __init__.py
                type1.py
                type2.py

    Each type file contains a class (e.g. Type1). My problem is that I would like to keep the code that uses this DSL as simple as possible, something like:

        import dsl
        x = Type1()
        ...

    This means that all of the important symbols should be available directly in the user's namespace. I have tried updating the top-level __init__.py file to import the relevant symbols:

        from types.type1 import Type1
        from types.type2 import Type2
        ...
        print globals()

    The output shows that the symbols are imported correctly, but they still aren't present in the caller's code (the code that's doing the import dsl). I think the problem is that the symbols are actually being imported into the 'dsl' namespace. How can I change this so that the classes are also directly available in the caller's namespace?
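
    A minimal sketch of how this could be wired up (module names taken from the question; whether the imports are written absolute or explicit-relative depends on the Python version in use). The key point is that import dsl can only ever bind the single name dsl in the caller, so the caller has to pull the classes in explicitly with from dsl import ...:

        # --- dsl/__init__.py (sketch) ---
        from dsl.types.type1 import Type1   # or, package-relative style: from .types.type1 import Type1
        from dsl.types.type2 import Type2   # or: from .types.type2 import Type2

        __all__ = ["Type1", "Type2"]        # controls what 'from dsl import *' exposes

        # --- caller code (sketch) ---
        from dsl import *                   # or, more explicitly: from dsl import Type1, Type2

        x = Type1()
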

    Read the article

  • Ways to implement tags - pros and cons of each

    - by bobobobo
    Using SO as an example, what is the most sensible way to manage tags if you anticipate they will change often?

    Way 1: Seriously denormalized (comma delimited)

        table posts
        +--------+-----------------+
        | postId | tags            |
        +--------+-----------------+
        | 1      | c++,search,code |

    Here tags are comma delimited. Pros: Tags are retrieved at once with a single select query; updating tags is simple, easy, and cheap. Cons: Extra parsing on tag retrieval; difficult to count how many posts use which tags.

    Alternatively, if limited to something like 5 tags:

        table posts
        +--------+-------+--------+-------+-------+-------+
        | postId | tag_1 | tag_2  | tag_3 | tag_4 | tag_5 |
        +--------+-------+--------+-------+-------+-------+
        | 1      | c++   | search | code  |       |       |

    Way 2: "Slightly normalized" (separate table, no intersection)

        table posts
        +--------+---------------+
        | postId | title         |
        +--------+---------------+
        | 1      | How do u tag? |

        table taggings
        +--------+---------+
        | postId | tagName |
        +--------+---------+
        | 1      | C++     |
        | 1      | search  |

    Pros: Easy to see tag counts (count(*) from taggings where tagName='C++'). Cons: tagName will likely be repeated many, many times.

    Way 3: The cool kid's (normalized with intersection table)

        table posts
        +--------+-------------------------------------+
        | postId | title                               |
        +--------+-------------------------------------+
        | 1      | Why is a raven like a writing desk? |

        table tags
        +-------+---------+
        | tagId | tagName |
        +-------+---------+
        | 1     | C++     |
        | 2     | search  |
        | 3     | foofle  |

        table taggings
        +--------+-------+
        | postId | tagId |
        +--------+-------+
        | 1      | 1     |
        | 1      | 2     |
        | 1      | 3     |

    Pros: No repeating tag names. More girls will like you. Cons: More expensive to change tags than way #1.

    Read the article

  • How can I abstract out the core functionality of several Rails applications?

    - by hornairs
    I'd like to develop a number of non-trivial Rails applications that all implement a core set of functionality but each have particular customizations, extensions, and aesthetic differences. How can I pull the core functionality (models, controllers, helpers, support classes, tests) common to all these systems out in such a way that updating the core will benefit every application based upon it?

    I've seen Rails Engines, but they seem too detached, almost too abstracted to be built upon. I can see them being useful for adding one component to an existing app, for example bolting a blog engine onto your existing e-commerce site. Since engines seem to be mostly self-contained, it seems difficult and inconvenient to override their functionality and views while keeping DRY.

    I've also considered abstracting the code into a gem, but this seems a little odd. Do I make the gem depend on the Rails gems, define models and controllers inside it, and then subclass them in my various applications? Or do I define many modules inside the gem that I include in different spots in my various applications? How do I test the gem, and then test the set of customizations and overridden functionality on top of it?

    I'm also concerned with how I'll develop the gem and the Rails apps in tandem: can I vendor a git repository of the gem into the app and push from that, so I don't have to build a new gem every iteration? Also, are there private gem hosts, and can I set up my own gem source?

    Finally, any general suggestions for this kind of undertaking? Abstraction paradigms to adhere to? Required reading? Comments from the wise who have done this before? Thanks!

    Read the article

  • Consequent attribute calculations with a queuing system

    - by vrinek
    For all of the following, assume rails v3.0, ruby v1.9, and resque.

    We have 3 models:

        Product  belongs_to :sku, belongs_to :category
        Sku      has_many :products, belongs_to :category
        Category has_many :products, has_many :skus

    When we update a product (let's say we disable it) we need some things to happen on the relevant sku and category. The same is true when a sku is updated. The proper way of achieving this is to have an after_save on each model that triggers the other models' update events. For example:

        products.each(&:disable!)
        # after_save triggers self.sku.products_updated
        # and self.category.products_updated (self is product)

    Now if we have 5000 products we are in for a treat: the same category might get updated hundreds of times and hog the database while doing so. We also have a nice queueing system, so the more realistic way of updating products would be:

        products.each(&:queue_disable!)

    which would simply toss 5000 new tasks onto the work queue. The problem of 5000 category updates still exists, though. Is there a way to avoid all those updates on the db? How can we coalesce all of the category.products_updated calls for each category in the queue?

    Read the article

  • Snapshot agent obliterates conflicts

    - by mwolfe02
    We are using merge replication in SQL Server 2000. We have a snapshot agent that runs every night and updates the publication snapshot. About six months ago we upgraded from SQL Server 7.0 to 2000 (that's not a typo). We noticed a sharp decline in conflicts at that time but could not track down the reason. We finally found that the nightly snapshot agent is recreating the conflict tables every night. This seems to be a change in behavior from SQL Server 7.0: we were running the snapshot agent before, and the conflicts would accumulate.

    Is there some way to prevent the data in the conflict tables from being lost when the snapshot runs? Can anyone confirm a change in behavior between 7.0 and 2000? Our current plan is to simply stop automatically updating the publication snapshot. Is that a reasonable workaround?

    Here is the line from the script that adds the snapshot:

        exec sp_addpublication_snapshot
            @publication = N'MyPub',
            @frequency_type = 4,
            @frequency_interval = 1,
            @frequency_relative_interval = 1,
            @frequency_recurrence_factor = 0,
            @frequency_subday = 1,
            @frequency_subday_interval = 5,
            @active_start_date = 0,
            @active_end_date = 0,
            @active_start_time_of_day = 500,
            @active_end_time_of_day = 235959

    Here is the step that runs in the agent job:

        Step Name: Run agent.
        Type: Replication Snapshot
        Command: -Publisher [WCDBS02] -PublisherDB [TaxDB] -Distributor [WCDBS02]
                 -Publication [TaxDB] -ReplicationType 2 -DistributorSecurityMode 1

    This appears to be running the Replication Snapshot Agent utility. There is no mention in that documentation of dropping and recreating system conflict tables, nor is there any flag that can be set to alter this behavior.

    Read the article

  • unexpected behaviour of object stored in web service Session

    - by draconis
    Hi. I'm using Session variables inside a web service to maintain state between successive method calls by an external application called QBWC. I set this up by decorating my web service methods with this attribute:

        [WebMethod(EnableSession = true)]

    I'm using the Session variable to store an instance of a custom object called QueueManager. The QueueManager has a property called ChangeQueue, which looks like this:

        [Serializable]
        public class QueueManager
        {
            ...
            public Queue<QBChange> ChangeQueue { get; set; }
            ...

    where QBChange is a custom business object belonging to my web service. Now, every time I get a call to a method in my web service, I use this code to retrieve my QueueManager object and access my queue:

        QueueManager qm = (QueueManager)Session[ticket];

    Then I remove an object from the queue using qm.dequeue() and save the modified QueueManager object (modified because it contains one less object in the queue) back to the Session variable, like so:

        Session[ticket] = qm;

    ready for the next web service method call using the same ticket. Now here's the thing: if I comment out this last line (//Session[ticket] = qm;), the web service behaves exactly the same way, reducing the size of the queue between method calls. Why is that? The web service seems to be updating a class contained in serialized form in a Session variable without being asked to. Why would it do that? When I deserialize my QueueManager object, does the qm variable hold a reference to the serialized object inside the Session[ticket] variable? This seems very unlikely.

    Read the article

  • Displaying performance metrics in a modern web app?

    - by Charles
    We're updating our ancient internal PHP application at work. Right now, we gather extensive performance measurements on every pageview and log them to the database. Additionally, users requested that some of the metrics be displayed at the bottom of the page. This worked out pretty well for us, because the last thing the application does on every request is include the file containing the HTML footer.

    The updated parts of the application use an MVC framework and a Dispatch/Request/Response loop. The page footer is no longer the last thing done; in fact, it could very well be the first thing done, before the rest of the page is created. Because we can grab the Response before it's returned to the user, we could include placeholders for the performance metrics in the footer and simply replace them with the actual numbers, but this strikes me as a bad idea somehow.

    How do you handle this in your modern web app? While we're using PHP, I'm curious how it's done in a Ruby/Rails app, and in your favorite Python framework.
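
    Since the question explicitly asks how a Python framework would handle this, here is a minimal sketch of the placeholder-replacement idea expressed as a WSGI middleware. The class name and the {{RENDER_MS}} token are made up for illustration; a real Rails (Rack) or Django/Flask app would hook its own middleware layer in the same spirit rather than use this exact code.

        import time

        class TimingMiddleware:
            """Measure how long the wrapped app takes and substitute the result
            for a placeholder token left in the rendered HTML footer."""

            def __init__(self, app, token=b"{{RENDER_MS}}"):
                self.app = app
                self.token = token

            def __call__(self, environ, start_response):
                start = time.perf_counter()
                captured = {}

                def capture(status, headers, exc_info=None):
                    captured["status"] = status
                    captured["headers"] = headers

                # Buffer the response so the placeholder can be replaced at the end.
                body = b"".join(self.app(environ, capture))
                elapsed_ms = ("%.1f" % ((time.perf_counter() - start) * 1000.0)).encode()
                body = body.replace(self.token, elapsed_ms)

                # Content-Length changes once the token is replaced, so recompute it.
                headers = [(k, v) for k, v in captured["headers"]
                           if k.lower() != "content-length"]
                headers.append(("Content-Length", str(len(body))))
                start_response(captured["status"], headers)
                return [body]

    Wrapping is one line (application = TimingMiddleware(application)), and the footer template simply emits the literal token where the number should appear.
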

    Read the article

  • Software usage analytics in C#

    - by TiernanO
    I have a project I am working on currently and would like to implement some sort of software usage tracking in the code: ideally, stuff like how often it's launched, how long it runs for, feature tracking, etc. I already use Exceptioneer for unhandled exceptions, but would like something similar for usage tracking. This data should all be anonymous and ideally run as a service by someone else, and I would like to give users the option to turn it off if they so wish.

    So, is this something I should implement myself, or are there third parties out there that do this sort of thing? I know it might be a sticky area, but I have seen stats about iPhone app usage; they do it, so why can't we? (If the user agrees, of course.)

    [Update] Based on the comments, I should have been more clear: this is a WinForms .NET 4 application, though I am thinking of updating it later with WCF. I would only be tracking my own application, though I would also want to know minor information about the environment (Windows OS version, SP, maybe processor and RAM).

    Read the article

  • Is there any point in using a volatile long?

    - by Adamski
    I occasionally use a volatile instance variable in cases where I have two threads reading from / writing to it and don't want the overhead (or potential deadlock risk) of taking out a lock; for example, a timer thread periodically updating an int ID that is exposed as a getter on some class:

        public class MyClass {
            private volatile int id;

            public MyClass() {
                ScheduledExecutorService execService = Executors.newScheduledThreadPool(1);
                execService.scheduleAtFixedRate(new Runnable() {
                    public void run() {
                        ++id;
                    }
                }, 0L, 30L, TimeUnit.SECONDS);
            }

            public int getId() {
                return id;
            }
        }

    My question: given that the JLS only guarantees that 32-bit reads will be atomic, is there any point in ever using a volatile long (i.e. 64-bit)?

    Caveat: please do not reply saying that using volatile over synchronized is a case of premature optimisation; I am well aware of how and when to use synchronized, but there are cases where volatile is preferable. For example, when defining a Spring bean for use in a single-threaded application I tend to favour volatile instance variables, as there is no guarantee that the Spring context will initialise each bean's properties in the main thread.

    Read the article

  • My TreeView data is not changing

    - by Vibin Jith
    Hi, I am trying to display user permissions in a TreeView. Permissions for each user are stored in the database as an XML file. On the page, a combo box is used to list the users and a TreeView is used to bind the permission XML. When a user is selected in the combo box, I take the XML from the database, connect it to an XmlDataSource, and bind it to the TreeView.

    What is happening is that the first time, the TreeView fills with the XML nodes, but after that it does not work. The first selection is fine; subsequent selections have no effect on the TreeView. The code runs while debugging with no problems. Can you tell me why the TreeView's data source is not updating? I used this code:

        Dim permissionRoot = From permissionNode In MyUser.UserPermissionXml.Root.Elements("menuNode")
        XmlTreeViewSource.Data = permissionRoot(0).ToString
        trvPermission.DataSource = XmlTreeViewSource
        trvPermission.DataBind()
        SetPermission(trvPermission.Nodes(0))

    The markup:

        <asp:TreeView ID="trvPermission" runat="server" ExpandDepth="2" ShowCheckBoxes="All"
            ShowLines="True" ForeColor="#005782">
            <DataBindings>
                <asp:TreeNodeBinding DataMember="menuNode" TextField="title" ValueField="value" />
            </DataBindings>
        </asp:TreeView>

    Read the article

  • Approach for parsing file and creating dynamic data structure for use by another program

    - by user275633
    All,

    Background: I have a customer who has some Python-based build scripts for their datacenter that I've inherited. I did not work on the original design, so I'm limited to some degree in what I can and can't change. My customer has a properties file that they use in their datacenter. Some of the values are used to build their servers, and unfortunately other applications also use these values, so I cannot change them to make things easier for me. What I want to do is make the scripts more dynamic, so that I don't have to keep updating them in the future and can just add more hosts to the property file. Unfortunately I can't change the current property file and have to work with it. The property file looks something like this:

        projectName.ClusterNameServer1.sslport=443
        projectName.ClusterNameServer1.port=80
        projectName.ClusterNameServer1.host=myHostA
        projectName.ClusterNameServer2.sslport=443
        projectName.ClusterNameServer2.port=80
        projectName.ClusterNameServer2.host=myHostB

    In their deployment scripts they basically have a lot of "if projectName.ClusterNameServerX" checks, where X is the number of each entry defined, and then do something, e.g.:

        if projectName.ClusterNameServer1.host != "" do X
        if projectName.ClusterNameServer2.host != "" do X
        if projectName.ClusterNameServer3.host != "" do X

    Then when they add another host (say Server4) they add another if statement.

    Question: I would like to make the scripts more dynamic: parse the properties file, put what I need into some data structure to pass to the deployment scripts, and then just iterate over the structure and do my deployment that way, so I don't have to constantly add a bunch of "if some host# do something" checks. I'm curious to hear suggestions as to how others would parse the file, what sort of data structure they would use, and how they would group things together by ClusterNameServer# or something else. Thanks
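
    A minimal sketch of one way this could be done in Python, assuming the file follows exactly the projectName.ServerName.key=value pattern shown above; parse_properties and the deploy(...) call are illustrative names, not part of the customer's existing scripts:

        from collections import defaultdict

        def parse_properties(path):
            """Group properties by server name, e.g.
            {'ClusterNameServer1': {'sslport': '443', 'port': '80', 'host': 'myHostA'}, ...}"""
            servers = defaultdict(dict)
            with open(path) as f:
                for line in f:
                    line = line.strip()
                    if not line or line.startswith("#"):
                        continue
                    key, _, value = line.partition("=")
                    try:
                        _project, server, prop = key.split(".", 2)
                    except ValueError:
                        continue  # skip keys that don't match the expected pattern
                    servers[server][prop] = value
            return dict(servers)

        # The deployment script can then loop instead of hard-coding if-statements
        # (deploy() is a stand-in for whatever the existing scripts do per host):
        # for server, props in parse_properties("cluster.properties").items():
        #     if props.get("host"):
        #         deploy(props["host"], props.get("port"), props.get("sslport"))
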

    Read the article

  • Modifying DOM of a webpage

    - by Prashant Singh
    The structure of a webpage is like this:

        <div id='abc'>
            <div class='a'>Some contents here </div>
            <div class='b'>Some other contents </div>
        </div>

    My aim is to add this after the class a in the above structure:

        <div class='a'>Some other contents here </div>

    so that the final structure looks like this:

        <div id='abc'>
            <div class='a'>Some contents here </div>
            <div class='a'>Some other contents here </div>
            <div class='b'>Some other contents </div>
        </div>

    Can there be a better way to do this using DOM properties? I was thinking of the naive way of parsing the content and updating it. Please comment if I am unclear in stating my doubt!
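
    The question does not say which language is doing the modification, so purely as an illustration of the DOM-based approach (as opposed to string parsing), here is a sketch in Python using BeautifulSoup 4; the same idea applies to any DOM API, e.g. insertAdjacentHTML / insertBefore in browser JavaScript:

        from bs4 import BeautifulSoup

        html = """
        <div id='abc'>
          <div class='a'>Some contents here </div>
          <div class='b'>Some other contents </div>
        </div>
        """

        soup = BeautifulSoup(html, "html.parser")

        # Build the new node and insert it right after the existing div.a
        new_div = soup.new_tag("div")
        new_div["class"] = "a"
        new_div.string = "Some other contents here "
        soup.find("div", class_="a").insert_after(new_div)

        print(soup.prettify())
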

    Read the article

  • Auto Submitting a form (cURL)

    - by cast01
    I'm writing a form to post data off to PayPal, and this works fine: I create the form with hidden fields and then have a submit button that submits everything to PayPal. However, when the user clicks that button, there is more that I want to do, for example change their cart status in the database. So I want to be able to execute some code when they click submit and then post the data to PayPal. I don't want to use JavaScript for this.

    My method at the moment uses cURL. The form posts back to the current page; I then check for $_POST data, do my other work like updating the status of the cart, and then build a cURL request and post the form data to PayPal. The post itself succeeds, but the browser doesn't go off to PayPal. At first I was just retrieving the result into a string and echoing it out, and I could see that the post was successful; then I set CURLOPT_FOLLOWLOCATION to 1, assuming this would send the browser off to PayPal, but it doesn't. What it seems to do is grab some content from PayPal and put it on my page.

    Is cURL the right tool for this, and if so, how do I get around this problem? I want to stay away from JavaScript for this, as only users with JavaScript enabled would have their cart updated. The code I'm using for cURL is below; post_data is an array I created, and I then read key-value pairs into post_string.

        //Set cURL Options
        curl_setopt($curl_connection, CURLOPT_CONNECTTIMEOUT, 30);
        curl_setopt($curl_connection, CURLOPT_USERAGENT, "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)");
        curl_setopt($curl_connection, CURLOPT_RETURNTRANSFER, false);
        curl_setopt($curl_connection, CURLOPT_SSL_VERIFYPEER, false);
        curl_setopt($curl_connection, CURLOPT_FOLLOWLOCATION, 1);
        curl_setopt($curl_connection, CURLOPT_MAXREDIRS, 20);

        //Set Data to be Posted
        curl_setopt($curl_connection, CURLOPT_POSTFIELDS, $post_string);

        //Execute Request
        curl_exec($curl_connection);

    Read the article

  • Quickly or concisely determine the longest string per column in a row-based data collection

    - by ccornet
    Judging from the failure of my last inquiry, I need to calculate and preset the widths of a set of columns in a table that is being turned into an Excel file. Unfortunately, the string data is stored in a row-based format, but the widths must be calculated in a column-based format. The data for the spreadsheets is generated from the following two collections:

        var dictFiles = l.Items.Cast<SPListItem>()
            .GroupBy(foo => foo.GetSafeSPValue("Category"))
            .ToDictionary(bar => bar.Key);
        StringDictionary dictCols = GetColumnsForItem(l.Title);

    where l is an SPList whose title determines which columns are used. Each SPListItem corresponds to a row of data, and rows are sorted into separate worksheets based on Category (hence the dictionary). The second line is just a simple StringDictionary that has the column name (A, B, C, etc.) as a key and the corresponding SPListItem field display name as the value. So for each Category, I enumerate through dictFiles[somekey] to get all the rows in that sheet, and get a particular cell's data using SPListItem.Fields[dictCols[colName]].

    What I am asking is: is there a quick or concise method, for any one dictFiles[somekey], to retrieve the longest string in each column provided by dictCols? If it is impossible to get both quickness and conciseness, I can settle for either (since I always have the O(n*m) route of just enumerating the collection and updating an array whenever strCurrent.Length > strLongest.Length). For example, if the goal table was the following...

        Item#  Field1     Field2      Field3
        1      Oarfish    Atmosphere  Pretty
        2      Raven      Radiation   Adorable
        3      Sunflower  Flowers     Cute

    ...I'd like a function which could cleanly take the collection of items 1, 2, and 3 and output, in the correct order:

        Sunflower, Atmosphere, Adorable

    Using .NET 3.5 and C# 3.0.

    Read the article

  • What are the constraints on Cocoa Framework version numbers?

    - by Joe
    We're distributing a Cocoa framework with regular updates, and we'll be updating the version numbers with each release. The Apple documentation appears to suggest that version numbers should be consecutive incrementing integers.

    We are distributing output in several formats, and the framework is only one of them. We would rather not have to maintain a separate numbering system just for our frameworks. We don't really care about the precise format of the framework's version numbers, so long as they change whenever the product changes and behave in a correct and expected manner. I'm looking for a sensible and correct way of avoiding having to run a separate version number counter. One suggestion is that for product version 12.34.56 we could simply remove the dots and say the framework version is 123456.

    Is there a constraint on the type of number that can be represented (uint? long?) Does it have to be a number? Could it be a string? Do the numbers have to be consecutive? Is there a standard way of doing things in this situation?

    Read the article

  • C#: Populating a UI using separate threads.

    - by Andrew
    I'm trying to make some sense out of an application I've been handed, in order to track down the source of an error. There's a bit of code (simplified here) which creates four threads, which in turn populate list views on the main form. Each method gets data from the database and retrieves graphics from a resource DLL in order to directly populate an ImageList and ListView. From what I've read on here (link), updating UI elements from any thread other than the UI thread should not be done, and yet this appears to work:

        Thread t0 = new Thread(new ThreadStart(PopulateListView1));
        t0.IsBackground = true;
        t0.Start();

        Thread t1 = new Thread(new ThreadStart(PopulateListView2));
        t1.Start();

        Thread t2 = new Thread(new ThreadStart(PopulateListView3));
        t2.Start();

        Thread t3 = new Thread(new ThreadStart(PopulateListView4));
        t3.Start();

    The error itself is a System.InvalidOperationException: "Image cannot be added to the ImageList.", which has me wondering if the above code is linked to it in some way. Is this method of populating the UI recommended, and if not, what are the possible complications resulting from it?

    Read the article

  • Spork doesn't reload code

    - by there-is-no-spoon
    I am using the following gems and ruby-1.9.3-p194:

        rails 3.2.3
        rspec-rails 2.9.0
        spork 1.0.0rc2
        guard-spork 0.6.1

    The full list of used gems is available in this Gemfile.lock or Gemfile. And I am using these configuration files: Guardfile, .rspec, spec_helper.rb, factories.rb.

    If I modify any model (or a custom validator in app/validators, etc.), reloading code does not work. I mean that when I run specs (hit Enter on the guard console), Spork contains the "old code" and I get obsolete error messages. But when I manually restart Guard and Spork (Ctrl-C, Ctrl-D, guard), everything works fine. This is getting tiring after a few times.

    Questions: Can somebody look at my config files please and fix the error which blocks code reloading? Or maybe this is an issue with the newest Rails version?

    PS: This problem repeats over some projects (and on some it does NOT), but I haven't figured out yet why this happens.

    PS2: Perhaps this problem has something to do with ActiveAdmin? When I change a file in app/admin, the code is reloaded.

    Read the article

  • Can I write a .NETCF Partial Class to extend System.Windows.Forms.UserControl?

    - by eidylon
    Okay... I'm writing a .NET CF (VB.NET 2008, 3.5 SP1) application which has one master form and dynamically loads specific UserControls based on menu clicks, in a sort of framework idea. There are certain methods and properties these controls all need in order to work within the app. Right now I am doing this with an Interface, but this is aggravating as all get out, because some of the methods are optional, and yet I MUST implement them by the nature of interfaces.

    I would prefer to use inheritance, so that I can have certain code be inherited with overridability, but if I write a class which inherits System.Windows.Forms.UserControl and then inherit my control from that, the designer squiggles it and tells me that UserControls MUST inherit directly from System.Windows.Forms.UserControl. (Talk about a design flaw!)

    So next I thought, well, let me use a partial class to extend System.Windows.Forms.UserControl, but when I do that, even though it all seems to compile fine, none of my new properties/methods show up on my controls.

    Is there any way I can use partial classes to 'extend' System.Windows.Forms.UserControl? For example, can anyone give me a code sample of a partial class which simply adds a MyCount As Integer read-only property to the System.Windows.Forms.UserControl class? If I can just see how to get this going, I can take it from there and add the rest of my functionality.

    Thanks in advance! I've been searching Google, but can't find anything that seems to work for UserControl extension on .NET CF. And the Interface method is driving me crazy, as even a small change means updating ALL the controls whether they need to 'override' the method or not.

    Read the article

  • My project is no longer used - how should I feel?

    - by flybywire
    For the last two years I have been developing and supporting an important project for a big customer. The project included mining data from the customer's existing systems, processing it, and displaying and updating it on the customer's public home page. The project was defined as crucial by the customer; I was paid good money and flown at the customer's expense to meet key employees.

    Some months ago, when the project was finished and in maintenance mode, I informed the customer that I was no longer interested in doing it, as I had a new opportunity that would not be compatible with my existing customer. I was paid to train one of their employees, flown to meet him, and made sure everything worked and that he could be safely left in charge of the project. We finished on good terms after I complied with all my obligations and they paid me all they owed me.

    Some days ago, just out of curiosity, I went to their website to see how the data continues to be updated, and much to my dismay I discovered that the day after my contract finished my system was "turned off" and it ceased to feed data to the public website.

    Let's put it clearly: there is no issue of money or a broken contract here. They are within their full rights to do whatever they want with my software. But it is an issue of a bruised "programmer's ego". Should I feel bad about it? (I do.) Should I care, and check with my customer whether they need some help? Or is it none of my business?

    Read the article

  • If a table has two xml columns, will inserting records be a lot slower?

    - by Lieven Cardoen
    Is it a bad thing to have two xml columns in one table? And how much slower are these xml columns in terms of updating/inserting/reading data?

    In Profiler this kind of insert normally takes 0 ms, but sometimes it goes up to 160 ms:

        declare @p8 xml
        set @p8=convert(xml,N'<interactions><interaction correct="false" score="0" id="0" gapid="0" x="61" y="225"><feedback/><element id="0" position="0" elementtype="1"><asset/></element></interaction><interaction correct="false" score="0" id="1" gapid="1" x="64" y="250"><feedback/><element id="0" position="0" elementtype="1"><asset/></element></interaction><interaction correct="false" score="0" id="2" gapid="2" x="131" y="250"><feedback/><element id="0" position="0" elementtype="1"><asset/></element></interaction></interactions>')

        declare @p14 xml
        set @p14=convert(xml,N'<contentinteractions/>')

        exec sp_executesql N'INSERT INTO [dbo].[PackageSessionNodes](
            [dbo].[PackageSessionNodes].[PackageSessionId], [dbo].[PackageSessionNodes].[TreeNodeId],
            [dbo].[PackageSessionNodes].[Duration], [dbo].[PackageSessionNodes].[Score],
            [dbo].[PackageSessionNodes].[ScoreMax], [dbo].[PackageSessionNodes].[Interactions],
            [dbo].[PackageSessionNodes].[BrainTeaser], [dbo].[PackageSessionNodes].[DateCreated],
            [dbo].[PackageSessionNodes].[CompletionStatus], [dbo].[PackageSessionNodes].[ReducedScore],
            [dbo].[PackageSessionNodes].[ReducedScoreMax], [dbo].[PackageSessionNodes].[ContentInteractions])
        VALUES (@ins_dboPackageSessionNodesPackageSessionId, @ins_dboPackageSessionNodesTreeNodeId,
            @ins_dboPackageSessionNodesDuration, @ins_dboPackageSessionNodesScore,
            @ins_dboPackageSessionNodesScoreMax, @ins_dboPackageSessionNodesInteractions,
            @ins_dboPackageSessionNodesBrainTeaser, @ins_dboPackageSessionNodesDateCreated,
            @ins_dboPackageSessionNodesCompletionStatus, @ins_dboPackageSessionNodesReducedScore,
            @ins_dboPackageSessionNodesReducedScoreMax, @ins_dboPackageSessionNodesContentInteractions);
        SELECT SCOPE_IDENTITY() as new_id

    This is the table:

        CREATE TABLE [dbo].[PackageSessionNodes](
            [PackageSessionNodeId] [int] IDENTITY(1,1) NOT NULL,
            [PackageSessionId] [int] NOT NULL,
            [TreeNodeId] [int] NOT NULL,
            [Duration] [int] NULL,
            [Score] [float] NOT NULL,
            [ScoreMax] [float] NOT NULL,
            [Interactions] [xml] NOT NULL,
            [BrainTeaser] [bit] NOT NULL,
            [DateCreated] [datetime] NULL,
            [CompletionStatus] [int] NOT NULL,
            [ReducedScore] [float] NOT NULL,
            [ReducedScoreMax] [float] NOT NULL,
            [ContentInteractions] [xml] NOT NULL,
            CONSTRAINT [PK_PackageSessionNodes] PRIMARY KEY CLUSTERED
            (
                [PackageSessionNodeId] ASC
            ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                    ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]
        GO

        ALTER TABLE [dbo].[PackageSessionNodes] WITH CHECK
            ADD CONSTRAINT [FK_PackageSessionNodes_PackageSessions] FOREIGN KEY([PackageSessionId])
            REFERENCES [dbo].[PackageSessions] ([PackageSessionId])
            ON UPDATE CASCADE ON DELETE CASCADE
        GO
        ALTER TABLE [dbo].[PackageSessionNodes] CHECK CONSTRAINT [FK_PackageSessionNodes_PackageSessions]
        GO
        ALTER TABLE [dbo].[PackageSessionNodes] WITH CHECK
            ADD CONSTRAINT [FK_PackageSessionNodes_TreeNodes] FOREIGN KEY([TreeNodeId])
            REFERENCES [dbo].[TreeNodes] ([TreeNodeId])
        GO
        ALTER TABLE [dbo].[PackageSessionNodes] CHECK CONSTRAINT [FK_PackageSessionNodes_TreeNodes]
        GO
        ALTER TABLE [dbo].[PackageSessionNodes] ADD CONSTRAINT [DF_PackageSessionNodes_Score] DEFAULT ((-1)) FOR [Score]
        GO
        ALTER TABLE [dbo].[PackageSessionNodes] ADD CONSTRAINT [DF_PackageSessionNodes_ScoreMax] DEFAULT ((-1)) FOR [ScoreMax]
        GO
        ALTER TABLE [dbo].[PackageSessionNodes] ADD CONSTRAINT [DF_PackageSessionNodes_DateCreated] DEFAULT (getdate()) FOR [DateCreated]
        GO
        ALTER TABLE [dbo].[PackageSessionNodes] ADD CONSTRAINT [DF_PackageSessionNodes_ReducedScore] DEFAULT ((-1)) FOR [ReducedScore]
        GO
        ALTER TABLE [dbo].[PackageSessionNodes] ADD CONSTRAINT [DF_PackageSessionNodes_ReducedScoreMax] DEFAULT ((-1)) FOR [ReducedScoreMax]
        GO

    Read the article

  • Confirm bug Magento 1.4 'show/hide editor' in CMS

    - by latvian
    Hi,

    When entering code in a CMS static block (possibly a page as well), and the code contains empty DIV tags such as:

        <a href="javascript:hide1(),show2(),hide3()"><div class="dropoff_button"></div></a>

    the DIV tags will be gone the next time you open the block to edit. It will look like this, without the div tags:

        <a href="javascript:hide1(),show2(),hide3()"> </a>

    and saving again modifies your code. I think it has something to do with the 'show/hide editor'. By default it goes into the WYSIWYG editor, so when updating a static block I don't see any other solution than to:

        1. Hide the editor by clicking 'show/hide editor'
        2. Delete the old code from the editor
        3. Get code that doesn't miss the DIVs
        4. Merge the new code with the code from step 3 in some editing software other than Magento
        5. Paste the result into the Magento editor
        6. Save

    Is this a bug? What is your solution? Can I turn off the WYSIWYG editor?

    Read the article

  • How do I execute queries upon DB connection in Rails?

    - by sycobuny
    I have certain initializing functions that I use to set up audit logging on the DB server side (i.e., in PostgreSQL, not Rails). At least one has to be issued (setting the current user) before inserting data into or updating any of the audited tables, or else the whole query will fail spectacularly. I can easily call these every time before running any save operation in the code, but DRY makes me think I should have the code repeated in as few places as possible, particularly since this diverges greatly from the ideal of database agnosticism.

    Currently I'm attempting to override ActiveRecord::Base.establish_connection in an initializer so that the queries are run as soon as I connect, but it doesn't behave as I expect it to. Here is the code in the initializer:

        class ActiveRecord::Base
          # extend the class methods, not the instance methods
          class << self
            alias :old_establish_connection :establish_connection # hide the default

            def establish_connection(*args)
              ret = old_establish_connection(*args) # call the default

              # set up necessary session variables for audit logging
              # call these after calling default, to make sure conn is established 1st
              db = self.class.connection
              db.execute("SELECT SV.set('current_user', 'test@localhost')")
              db.execute("SELECT SV.set('audit_notes', NULL)") # end "empty variable" err

              ret # return the default's original value
            end
          end
        end

        puts "Loaded custom establish_connection into ActiveRecord::Base"

    Starting the server:

        sycobuny:~/rails$ ruby script/server
        => Booting WEBrick
        => Rails 2.3.5 application starting on http://0.0.0.0:3000
        Loaded custom establish_connection into ActiveRecord::Base

    This doesn't give me any errors, and unfortunately I can't check what the method looks like internally (I was using ActiveRecord::Base.method(:establish_connection), but apparently that creates a new Method object each time it's called, which is seemingly worthless because I can't check object_id for any worthwhile information, and I also can't reverse the compilation). However, the code never seems to get called, because any attempt to run a save or an update on a database object fails as I predicted earlier.

    If this isn't a proper way to execute code immediately on connection to the database, then what is?

    Read the article

  • jquery form extension ajax

    - by Craig Wilson
    http://www.malsup.com/jquery/form/#html

    I have multiple forms on a single page. They all use the same class "myForm". Using the above extension, I can get them to successfully process and POST to ajax-process.php:

        <script>
        // wait for the DOM to be loaded
        $(document).ready(function() {
            // bind 'myForm' and provide a simple callback function
            $('.myForm').ajaxForm(function() {
                alert("Thank you for your comment!");
            });
        });
        </script>

    I'm having an issue, however, with the response. I need the comment that the user submitted to be displayed in the respective div that it was submitted from. I can either set this as a hidden field in the form, or as text in the ajax-process.php file. I can't work out how to get the response from ajax-process.php into something I can work with in the script; if I run the following, it appends to all the forms (obviously). The only way I can think to do it is to repeat the script using individual DIV IDs instead of a single class. However, there must be a way of updating only the div that the ajax-process.php response belongs to!

        // prepare the form when the DOM is ready
        $(document).ready(function() {
            // bind form using ajaxForm
            $('.myForm').ajaxForm({
                // target identifies the element(s) to update with the server response
                target: '.myDiv',
                // success identifies the function to invoke when the server response
                // has been received; here we apply a fade-in effect to the new content
                success: function() {
                    $('.myDiv').fadeIn('slow');
                }
            });
        });

    Any suggestions?

    Read the article

  • Getting Started: Silverlight 4 Business Application

    - by Eric J.
    With the arrival of VS 2010 and Silverlight 4, I decided it's time to look into Silverlight and understand how to build a 3-tier business application. After several hours of searching for and reading documentation and tutorials, I'm thoroughly confused (and that doesn't happen easily). Here are some specific points I don't understand. I welcome guidance on any of them, and would also appreciate any references to a really good tutorial.

    Brad Abrams's "What is a .NET RIA services" (written for Silverlight 3) seemed very promising, until I realized I don't have System.Web.Ria.dll on my system. Am I missing an optional download? Was this rolled into another DLL for Silverlight 4? Did this go away in favor of something else in Silverlight 4?

    This recent blog says to start from a Silverlight Business Application, remove unwanted stuff, create a WCF RIA Services class library project, and copy files and references from the Business Application to the WCF RIA Services project, while manually updating resource references (perhaps a bug in the B2 compiler). Is this really the right road to go down? It seems very clumsy.

    My requirements are to perform very simple CRUD on straightforward business objects. I'm looking forward to suggestions on how to do that the Silverlight 4 way.

    Read the article

  • UpdatePanel reloads the whole page

    - by Emil D
    Hi. I'm building an ASP.NET custom control inside which I have two DropDownLists: companyIdSelection and productFamilySelection. I populate companyIdSelection at Page_Load, and populate productFamilySelection depending on the selected item in companyIdSelection. I'm using UpdatePanels to achieve this, but for some reason every time companyIdSelection is updated, Page_Load is called again (which, as far as I know, should only happen when the entire page is reloaded), the list is reloaded, and the item the user selected is lost (the selected item is always the top one). Here's the code:

        <asp:UpdatePanel ID="updateFamilies" runat="server" UpdateMode="Always">
            <ContentTemplate>
                Company ID:<br>
                <br></br>
                <asp:DropDownList ID="companyIdSelection" runat="server" AutoPostBack="True"
                    OnSelectedIndexChanged="companyIdSelection_SelectedIndexChanged">
                </asp:DropDownList>
                <br></br>
                Product Family:
                <br></br>
                <asp:DropDownList ID="productFamilySelection" runat="server" AutoPostBack="True"
                    onselectedindexchanged="productFamilySelection_SelectedIndexChanged">
                </asp:DropDownList>
                <br>
            </ContentTemplate>
        </asp:UpdatePanel>

        protected void Page_Load(object sender, EventArgs e)
        {
            // companyIds returns the object containing the initial data items
            this.companyIdSelection.DataSource = companyIds();
            this.companyIdSelection.DataBind();
        }

        protected void companyIdSelection_SelectedIndexChanged(object sender, EventArgs e)
        {
            // Page_Load is called again for some reason before this method is called,
            // so it resets the companyIdSelection
            EngDbService s = new EngDbService();
            productFamilySelection.DataSource = s.getProductFamilies(companyIdSelection.Text);
            productFamilySelection.DataBind();
        }

    Also, I tried setting the UpdateMode of the UpdatePanel to "Conditional" and adding an async postback trigger, but the result was the same. What am I doing wrong?

    PS: I fixed the updating problem by using Page.IsPostBack in the Page_Load method, but I would still like to avoid a full postback if possible.

    Read the article
