Search Results

Search found 592 results on 24 pages for 'rolling stones'.

  • Should the entity framework + self tracking entities be saving me time?

    - by sipwiz
    I've been using the Entity Framework in combination with the self-tracking entity code generation templates for my latest Silverlight-to-WCF application. It's the first time I've used the Entity Framework in a real project, and my hope was that I would save myself a lot of time and effort by being able to automatically update the whole data access layer of my project when my database schema changed. Happily, I've found that to be the case: updating my database schema by adding a new table, changing column names, adding new columns, etc. can be propagated to my business object classes by using the update-from-database option on the Entity Framework model.

    Where I'm hurting is the CRUD operations within my WCF service in response to actions on my Silverlight client. I use the same self-tracking Entity Framework business objects in my Silverlight app, but I find I'm continually having to fight against problems such as foreign key associations not being handled correctly when updating an object, or the change tracker getting confused about the state of an object at the Silverlight end and the data access operation within the WCF layer throwing a wobbly. It's got to the point where I have now spent more time dealing with these quirks than I did on my previous project, where I used Linq-to-SQL as the starting point for rolling my own business objects. Is it just me being hopeless, or is the self-tracking entities approach something that should be avoided until it's more mature?

  • ASP.NET Membership: Extending Role membership?

    - by mark smith
    Hi there, I have been taking a look at ASP.NET membership and it seems to provide everything I need, but I need some kind of custom Role functionality. Currently I can add a user to a role, great. But I also need to be able to add Permissions to Roles, i.e. Role: Editor; Permissions: Can View Editor Menu, Can Write to Editors Table, Can Delete Entries in Editors Table. Currently it doesn't support this. The idea behind this is to create an admin option in my program to create a role and then assign permissions to it, to say "allow the user to view a certain part of the application" or "allow the user to open a menu item". Any ideas how I would implement something like this? I presume a custom role provider, but I was wondering if some kind of framework extension existed already, without rolling my own? Or does anybody know a good tutorial on how to tackle this issue? I am quite happy with what the ASP.NET SQL provider has created in terms of tables etc., but I think I need to extend this by adding another table called RolesPermissions and then (I presume :-)) adding some kind of enumeration into the table for each valid permission? Thanks in advance.
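
    A minimal sketch of the schema extension being described (table and column names are illustrative; only the aspnet_* tables are part of the standard SQL membership provider):

    ```sql
    -- Hypothetical permission tables layered on top of the membership schema
    CREATE TABLE Permissions (
        PermissionId INT IDENTITY(1,1) PRIMARY KEY,
        Name NVARCHAR(100) NOT NULL UNIQUE      -- e.g. 'CanViewEditorMenu'
    );

    CREATE TABLE RolesPermissions (
        RoleId UNIQUEIDENTIFIER NOT NULL,       -- FK to aspnet_Roles.RoleId
        PermissionId INT NOT NULL,              -- FK to Permissions.PermissionId
        PRIMARY KEY (RoleId, PermissionId)
    );

    -- Does any of a user's roles grant a given permission?
    SELECT COUNT(*)
    FROM aspnet_UsersInRoles ur
    JOIN RolesPermissions rp ON rp.RoleId = ur.RoleId
    JOIN Permissions p       ON p.PermissionId = rp.PermissionId
    WHERE ur.UserId = @UserId AND p.Name = @PermissionName;
    ```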

  • Strange execution times in T-SQL

    - by TonyP
    Hi all, I have two stored procedures; the first one calls the second. If I execute the second one alone it takes over 5 minutes to complete, but when executed within the first one it takes a little over 1 minute. What is the reason? Here is the first one:

    ```sql
    ALTER procedure [dbo].[schRefreshPriceListItemGroups]
    as
    begin tran

    delete from PriceListItemGroups
    if @@error != 0 goto rolback

    Insert PriceListItemGroups (comno, t$cuno, t$cpls, t$cpgs, t$dsca, t$cpru)
    SELECT distinct c.comno, c.t$cuno, c.t$cpls, I.t$cpgs, g.t$dsca, g.t$cpru
    FROM TTCCOM010nnn C
    JOIN TTDSLS032nnn PL ON PL.comno = c.Comno and PL.t$cpls = c.t$cpls
    JOIN TTIITM001nnn I  ON I.t$item = pl.t$item AND I.comno = pl.comNo
    JOIN TTCMCS024nnn G  ON g.T$cprg = I.t$cpgs AND g.comno = I.Comno
    WHERE c.t$cpls != ''
    order by comno desc, t$cuno, t$cpgs
    if @@error != 0 goto rolback

    -----------------------------------------------------
    Exec scrRefreshCustomersCatalogs
    -----------------------------------------------------

    commit tran
    return

    rolback:
    Rollback tran
    ```

    And the second one:

    ```sql
    Alter proc scrRefreshCustomersCatalogs
    as
    declare @baanIds table (id int identity(1,1), baanId varchar(12))
    declare @baanId varchar(12), @i int, @n int

    Insert @baanIds(BaanId) select baanId from ftElBaanIds()
    SELECT @I = 1, @n = max(id) from @baanIds
    select @i, @n

    Begin tran
    if @@error != 0 goto xRollBack

    WHILE @I <= @n
    Begin
        select @baanId = baanId from @baanIds where id = @i
        if @@error != 0 goto xRollBack

        Delete from customersCatalogs where comno + '-' + t$cuno = @baanId
        print Convert(varchar, @i) + ' baanId=' + @baanId

        Insert customersCatalogs exec customersCatalog @baanId
        if @@error != 0 goto xRollBack

        set @i = @i + 1;
    end

    Commit Tran
    Update statistics customersCatalogs with fullscan
    Return

    xRollBack:
    Print '*****Rolling back*************'
    Rollback tran
    ```

  • Query error handling in CodeIgniter

    - by Sajith S Narayanan
    Hi all, I am trying to execute a MySQL query using the CodeIgniter Active Record methods. If the query is malformed, then CI raises internal server error 500 and quits without processing the next steps. I need to roll back all the other queries processed before that error statement, and the rollback is also not happening. Can you help, please? The code snippet is below:

    ```php
    function dbInsertInformationToDB($data_array)
    {
        $returnID = "";
        $uniqueDataArray = array();     // I prepare an array of values here
        $uniqueTableList = filter_unique_tables($data_array[2]);

        $this->db->trans_begin();

        // Inserting is done here. When there is a query error in
        // $this->db->insert(), it is not rolling back the previous query executed.
        foreach ($uniqueTableList as $table_name) {
            $uniqueDataArray = filterDataArray($data_array, $table_name, 2);
            $this->db->insert($table_name, $uniqueDataArray);
            if ($this->db->_error_message()) {
                $error = "I am caught!!";
            }
            $returnID = $this->db->affected_rows();
        }

        if ($this->db->trans_status() === FALSE) {
            $this->db->trans_rollback();
        } else {
            $this->db->trans_commit();
        }

        return "ERROR";
    }
    ```

  • Application log aggregation, management and notifications...

    - by Matthew Savage
    I'm wondering what everyone is using for logging, log management and log aggregation on their systems. I am working in a company which uses .NET for all its applications, and all systems are Windows based. Currently each application looks after its own logging and notifications of failures (e.g. if app A fails it will send out its own 'call for help' to an admin). While this current practice works, it's a bit hacky and hard to manage. I've been trying to find some options for making this work better, and I've come up with the following:

    - log4net & Chainsaw (ah, if it works).
    - Logging via log4net or another framework into a central database & rolling our own management tool.
    - Logging to the Windows event log and using MOM or System Center Operations Manager to aggregate and manage each of these servers & their apps.
    - A hand-rolled solution to suck all the log files into one point and work some magic across them.

    Essentially what we are after is something which can pull log entries all together and allow for some analytics to be run across them, plus use a kind of event-based system to, for example, send out a warning email when there have been 30+ warning-level logs for an application in the last x minutes. So is there anything I've missed, or something someone else can suggest?

  • Debug NAudio MP3 reading difference?

    - by Conrad Albrecht
    My code using NAudio to read one particular MP3 gets different results than several other commercial apps. Specifically: my NAudio-based code finds ~1.4 sec of silence at the beginning of this MP3 before "audible audio" (a drum pickup) starts, whereas other apps (Windows Media Player, RealPlayer, WavePad) show ~2.5 sec of silence before that same drum pickup. The particular MP3 is "Like A Rolling Stone" downloaded from Amazon.com. I tested several other MP3s and none show any similar difference between my code and other apps. Most MP3s don't start with such a long silence, so I suspect that's the source of the difference.

    Debugging problems:

    - I can't actually find a way to even prove that the other apps are right and NAudio/me is wrong, i.e. to compare block-by-block my code's results to a "known good reference implementation"; therefore I can't even precisely define the "error" I need to debug.
    - Since my code reads thousands of samples during those 1.4 sec with no obvious errors, I can't think how to narrow down where/when in the input stream to look for a bug.
    - The heart of the NAudio code is a P/Invoke call to acmStreamConvert(), which is a Windows "black box" call which I can't think how to error-check.

    Can anyone think of any tricks/techniques to debug this?
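
    Not a fix, but one way to put a number on what NAudio itself decodes: scan the PCM that Mp3FileReader produces and report the first sample above a threshold. A rough sketch (the amplitude threshold of 200 is an arbitrary assumption):

    ```csharp
    using System;
    using NAudio.Wave;

    class SilenceProbe
    {
        static void Main(string[] args)
        {
            using (var reader = new Mp3FileReader(args[0]))    // decodes to 16-bit PCM
            {
                var wf = reader.WaveFormat;
                int frameBytes = (wf.BitsPerSample / 8) * wf.Channels;
                byte[] buffer = new byte[wf.AverageBytesPerSecond];   // ~1 second per read
                long frames = 0;
                int read;
                while ((read = reader.Read(buffer, 0, buffer.Length)) > 0)
                {
                    // Look at the first channel of each sample frame
                    for (int i = 0; i + 1 < read; i += frameBytes, frames++)
                    {
                        short sample = BitConverter.ToInt16(buffer, i);
                        if (Math.Abs((int)sample) > 200)    // "audible" threshold: a guess
                        {
                            Console.WriteLine("First non-silent sample at {0:F2} sec",
                                              frames / (double)wf.SampleRate);
                            return;
                        }
                    }
                }
                Console.WriteLine("No samples above threshold found");
            }
        }
    }
    ```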

  • jQuery change image on mouse rollover

    - by Craig Rinde
    I have been having problems with this. I think this should be pretty simple, but I cannot seem to get it to work. I want a new image to appear when rolling over my Facebook button. Thanks for your help!

    ```html
    <p align="right">
      <a href="http://www.facebook.com/AerialistPress"><img height="30px" id="facebook" class="changePad" alt="Aerialist Press Facebook Page" src="/sites/aerialist.localhost/files/images/facebook300.jpg" /></a>
      <a href="http://twitter.com/#!/AerialistPress"><img height="30px" class="changePad" alt="Aerialist Press Twitter Page" src="/sites/aerialist.localhost/files/images/twitter300.jpg" /></a>
      <a href="http://www.pinterest.com/aerialistpress"><img height="30px" class="changePad" alt="Aerialist Press Pinterest Page" src="/sites/aerialist.localhost/files/images/pinterest300.jpg" /></a>
    </p>

    <script>
    jQuery(document).ready(function() {
      jQuery('#facebook').mouseover(function() {
        // attr('src').replace(...) only returns a new string and discards it;
        // pass the new path as the second argument to actually swap the image
        jQuery('#facebook').attr('src', '/sites/aerialist.localhost/files/images/facebook-roll.jpg');
      });
    });
    </script>
    ```

  • Search and replace hundreds of strings in tens of thousands of files?

    - by C Johnson
    I am looking into changing the file names of hundreds of files in a (C/C++) project that I work on. The problem is our software has tens of thousands of files that include (i.e. #include) these hundreds of files that will get changed. This looks like a maintenance nightmare. If I do this I will be stuck in Ultra-Edit for weeks, rolling hundreds of regexes by hand, replacing

    ```
    ^\#include.*["<\\/]stupid_name.*$
    ```

    with

    ```
    #include <dir/new_name.h>
    ```

    Such drudgery would be worse than peeling hundreds of potatoes in a sunken submarine in the Antarctic with a spoon. I think it would rather be ideal to put the inputs and outputs into a table like so:

    ```
    stupid_name.h   <->  <dir/new_name.h>
    stupid_nameb.h  <->  <dir/new_nameb.h>
    stupid_namec.h  <->  <dir/new_namec.h>
    ```

    and feed this into a regular expression engine / tool / app / etc. My ultimate question: is there a tool that will do that? Bonus question: is it multi-threaded? I looked at quite a few search-and-replace topics here on this website, and found lots of standard queries that asked a variant of the following question:

    - standard question: Replace one term in N files.

    as opposed to:

    - my question: Replace N terms in N files.

    Thanks in advance for any replies.
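
    In the absence of a ready-made tool, a throwaway script over the mapping table is one option. A minimal (single-threaded) sketch in Python; the mapping file format, file extensions and source root are all assumptions:

    ```python
    #!/usr/bin/env python
    # Rewrite #include lines across a source tree from a mapping file whose
    # lines look like:  stupid_name.h <-> <dir/new_name.h>
    import os
    import re

    mappings = []
    with open("renames.txt") as f:
        for line in f:
            old, new = [s.strip() for s in line.split("<->")]
            pattern = re.compile(r'^(\s*#\s*include\s*)["<].*%s\s*[">]'
                                 % re.escape(old))
            mappings.append((pattern, r'\g<1>%s' % new))

    for root, dirs, files in os.walk("src"):
        for name in files:
            if not name.endswith((".c", ".cpp", ".h", ".hpp")):
                continue
            path = os.path.join(root, name)
            with open(path) as f:
                lines = f.readlines()
            changed = False
            for idx, line in enumerate(lines):
                for pattern, repl in mappings:
                    new_line = pattern.sub(repl, line)
                    if new_line != line:
                        lines[idx] = line = new_line
                        changed = True
            if changed:
                with open(path, "w") as f:
                    f.writelines(lines)
    ```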

  • How to create a log file in the folder which will be created at run time

    - by swati
    Hello everyone, I am new to the Apache logger. I am using Apache log4j for my application, with the following configuration file:

    ```properties
    # configure the root logger
    log4j.rootLogger=INFO, STDOUT, DAILY

    # configure the console appender
    log4j.appender.STDOUT=org.apache.log4j.ConsoleAppender
    log4j.appender.STDOUT.Target=System.out
    log4j.appender.STDOUT.layout=org.apache.log4j.PatternLayout
    log4j.appender.STDOUT.layout.conversionPattern=%d{yyyy-MM-dd HH:mm:ss.SSS} [%p] %c:%L - %m%n

    # configure the daily rolling file appender
    log4j.appender.DAILY=org.apache.log4j.DailyRollingFileAppender
    log4j.appender.DAILY.File=log4jtest.log
    log4j.appender.DAILY.DatePattern='.'yyyy-MM-dd-HH-mm
    log4j.appender.DAILY.layout=org.apache.log4j.PatternLayout
    log4j.appender.DAILY.layout.conversionPattern=%d{yyyy-MM-dd HH:mm:ss.SSS} [%p] %c:%L - %m%n
    ```

    When my application runs it creates a folder called somename_2010-04-09-23-09. My log file has to be created inside this somename_2010-04-09-23-09 folder (which is created at run time). Is there any way to do that? Is there any way to specify in the configuration file that the log file should be created at run time inside the somename_2010-04-09-23-03 folder? I would really appreciate it if someone could answer my questions. Thanks, Swati
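
    One approach that may fit, relying on log4j 1.x's system-property substitution in configuration values (the property name logdir is made up): set the property before the first logger is configured, then reference it in the appender's File option.

    ```java
    // Run this before any Logger is obtained, e.g. first thing in main()
    String folder = "somename_" +
            new java.text.SimpleDateFormat("yyyy-MM-dd-HH-mm").format(new java.util.Date());
    new java.io.File(folder).mkdirs();      // create the run-time folder
    System.setProperty("logdir", folder);   // made-up property name
    ```

    and in the configuration file:

    ```properties
    log4j.appender.DAILY.File=${logdir}/log4jtest.log
    ```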

  • Running migration on server when deploying with capistrano

    - by Pandafox
    Hi, I'm trying to deploy my Rails application with Capistrano, but I'm having some trouble running my migrations. In my development environment I just use SQLite as my database, but on my production server I use MySQL. The problem is that I want the migrations to run from my server and not my local machine, as I am not able to connect to my database from a remote location. My server setup: a Debian box running nginx, Passenger, MySQL and a git repository. What is the easiest way to do this?

    Update: here's my deploy script:

    ```ruby
    set :application, "example.com"
    set :domain, "example.com"
    set :scm, :git
    set :repository, "[email protected]:project.git"
    set :use_sudo, false
    set :deploy_to, "/var/www/example.com"

    role :web, domain
    role :app, domain
    role :db, "localhost", :primary => true

    after "deploy", "deploy:migrate"
    ```

    When I run cap deploy, everything is working fine until it tries to run the migration. Here's the error I'm getting:

    ```
    ** [deploy:update_code] exception while rolling back: Capistrano::ConnectionError,
    connection failed for: localhost (Errno::ECONNREFUSED: Connection refused - connect(2))
    connection failed for: localhost (Errno::ECONNREFUSED: Connection refused - connect(2)))
    ```

    This is why I need to run the migration from the server and not from my local machine. Any ideas?
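
    A hedged observation, not verified against this setup: Capistrano opens an SSH connection to every role host from the machine running cap, so role :db, "localhost" makes it try to SSH into the local workstation, which is what the ECONNREFUSED suggests. Pointing the db role at the server instead would make deploy:migrate run over SSH on the server:

    ```ruby
    role :db, domain, :primary => true   # run migrations on the server, not locally
    ```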

  • Calling addSubview again causes slowdown

    - by Tom
    Hi guys, I am writing a little music game for the iPhone. I am almost done; this is the only issue which keeps me from rolling it out, so any help to solve it is much appreciated. This is what I do: in my appDelegate I add my menu-view-screen to the window. The menu-view-screen acts as a container and controls which view gets presented to the user. That means on the menu-view-screen I have 4 buttons (new game, options, faq, highscore). When the user clicks on a button, something like this happens:

    ```objc
    if (self.gameViewController == nil) {
        GameViewController *viewController = [[GameViewController alloc] initWithNibName:@"GameViewController" bundle:nil];
        self.gameViewController = viewController;
        [viewController release];
    }
    [self.view addSubview:self.gameViewController.view];
    [[NSNotificationCenter defaultCenter] addObserver:self
                                             selector:@selector(handleSwitchViewNotificationFromGameView:)
                                                 name:@"SwitchView"
                                               object:gameViewController];
    ```

    When the user returns to the menu, this piece of code gets executed:

    ```objc
    [[NSNotificationCenter defaultCenter] removeObserver:self];
    [self.gameViewController viewWillDisappear:YES];
    [self.gameViewController.view removeFromSuperview];
    ```

    This works fine for all screens, but not for the game screen (well, this is the only one with heaps of user interaction): the responsiveness of the iPhone (when playing tones) gets really slow. The performance is fine when I display the gameview for the first time; it starts getting slower as soon as I add it to the menu-view's container subviews again (addSubview), basically when opening up a new game. Any ideas what causes this (or how to get around it)? Thanks heaps. Best regards, Tom

  • Stubbing an ActsAs Rails Plugin

    - by Rabbott
    I need to create a plugin much like Authlogic (or even just add on to Authlogic), but due to requirements beyond my control I need my plugin to authenticate using SOAP. Basically the plugin would require that anyone accessing the controller (before_filter would be fine) has to authenticate first. I have ZERO control over the login page or the SOAP server; I am simply a client attempting to authenticate to the provider's SOAP web service. Here is what happens:

    1. The before_filter realizes that no session[:credential] is set, and forwards the user to the URL on the provider's servers.
    2. The user enters their credentials and, once authenticated, the web service forwards the user to a URL that has been entered by their sysadmins, attaching a token to the URL on its way back.
    3. I need to take that token, append it to some parameters stored in a local YAML file, and make the SOAP call to the provider's server.
    4. If all goes as planned, I need to set session[:credential] to the result of the SOAP call and forward the user to the root page.

    Subsequent calls to the before_filter will not make the SOAP call, because session[:credential] is set. Ideally I think this would be awesome to slap on top of Authlogic, but I'm not sure how to do this. So I started to create my own acts_as_soap_authentic plugin, which isn't causing errors, but doesn't do anything. Anyone have any pointers or tips as to how I can get the ball rolling here? It seems simple, but is proving not to be.
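
    A rough sketch of that before_filter flow (the URL, YAML path and validate_token helper are all made up; the SOAP call itself is left abstract):

    ```ruby
    class ApplicationController < ActionController::Base
      PROVIDER_LOGIN_URL = "https://provider.example/login"   # placeholder

      before_filter :require_soap_authentication

      private

      def require_soap_authentication
        return if session[:credential]
        if params[:token]
          # Back from the provider: combine the token with parameters from a
          # local YAML file and validate it over SOAP (validate_token is made up)
          config = YAML.load_file("#{Rails.root}/config/soap_auth.yml")
          session[:credential] = validate_token(params[:token], config)
          redirect_to root_url
        else
          redirect_to PROVIDER_LOGIN_URL
        end
      end
    end
    ```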

  • How to export Oracle statistics

    - by A_M
    Hi, I am writing some new SQL queries and want to check the query plans that the Oracle query optimiser would come up with in production. My development database doesn't have anything like the data volumes of the production database. How can I export database statistics from a production database and re-import them into a development database? I don't have access to the production database, so I can't simply generate explain plans on production without going through a third-party hosting organisation. This is painful. So I want a local database which is in some way representative of production, on which I can try out different things.

    Also, this is for a legacy application. I'd like to "improve" the schema by adding appropriate indexes, constraints, etc. I need to do this in my development database first, before rolling out to test and production. If I add an index and re-generate statistics in development, then the statistics will be generated around the development data volumes, which makes it difficult to assess the impact of my changes on production. Does anyone have any tips on how to deal with this? Or is it just a case of fixing unexpected behaviour once we've discovered it on production? I do have a staging database with production volumes, but again I have to go through a third party to run queries against this, which is painful. So I'm looking for ways to cut out the middle man as much as possible. All this is using Oracle 9i. Thanks.
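
    For the statistics transfer itself, DBMS_STATS supports this round trip; a sketch (the schema and staging-table names are placeholders, and someone with production access would have to run the export half):

    ```sql
    -- On production: stage the optimiser statistics in an ordinary table
    EXEC DBMS_STATS.CREATE_STAT_TABLE('APPOWNER', 'PROD_STATS');
    EXEC DBMS_STATS.EXPORT_SCHEMA_STATS('APPOWNER', 'PROD_STATS');
    -- ...then move the PROD_STATS table across with exp/imp or a database link

    -- On development: load the staged statistics into the data dictionary
    EXEC DBMS_STATS.IMPORT_SCHEMA_STATS('APPOWNER', 'PROD_STATS');
    ```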

  • Creating Emulated iSCSI Target in a Lab/Testing Environment using Windows Server 2008 R2

    - by Brian McCleary
    We have a single server running Windows Server 2008 with Hyper-V installed, running 5 virtual machines. I have purchased a second Dell R805 server so that we can create a fail-over cluster with our current R805 that is in production. Right now, our R805 connects via iSCSI to an MD3000i iSCSI SAN. Before we roll out the second server and clustering to our production environment, I want to be able to test and "play with" the clustering features in our lab. The problem is that I don't want to spend a couple thousand dollars on another iSCSI SAN server just for testing. I already have two servers in my lab that are installed with Windows Server 2008 R2 64-bit (one is the R805 and another a spare desktop that was lying around) with the Hyper-V role enabled, so they should be ready to test with, but I don't have an iSCSI target to use as the Cluster Shared Volume.

    Is there any way to install some sort of emulated iSCSI target, either on a Hyper-V image or on an external spare computer that we have? In our lab we obviously don't need a real SAN, just something we can use to test setting up the clustering properly outside of our production environment. Any advice is appreciated.

    FYI - I have read Jose Barret's blog on WUDSS at http://blogs.technet.com/josebda/archive/2008/01/07/installing-the-evaluation-version-of-wudss-2003-refresh-and-the-microsoft-iscsi-software-target-version-3-1-on-a-vm.aspx, but it seems awfully complex. I'm hoping for an easier solution.

  • Choosing a method for a webservice

    - by Wrikken
    I'm asked to set up a new webservice which should be easily usable in whatever language (PHP, .NET, Java, etc.) possible. Of course rolling my own can be done, accepting different content-types (xml / x-www-form-urlencoded (normal post) / json / etc.), but an existing method or mechanism would of course be preferred, cutting down time spent on development for the consumers of the service. The webservice does accept modifications / sets (it is not only simple data retrieval), but those will most likely be quite a lot fewer than gets (we estimate about 2.5% sets, 97.5% gets). The term webservice here indicates that the protocol should go over HTTP; it cannot be implemented entirely client-side (JavaScript in the end-user's browser, etc.), as it needs specific user authentication. Both gets and sets are pretty light on the parameter count (usually 1 to 4). Methods like REST (which I'd prefer for gets alone), XML-RPC and SOAP (might be a bit overkill, but has the advantage of explicitly defined methods and returns) are the usual suspects. What, in your opinion / experience, is the most widely 'spoken' and most easily implementable protocol in different languages (seen from the consumers' viewpoint) which could fulfill this need?

  • php paging class

    - by Stick it to THE MAN
    Can anyone recommend a good PHP paging class? I have searched Google, but have not seen anything that matches my requirements. Rather than "rolling my own" (and almost surely reinventing the wheel), I decided to check in here first.

    First some background: I am developing a website using Symfony 1.3.2 with Propel ORM on Ubuntu 9.10. I am currently using the Propel pager, which is OK, but I recently started using memcache to speed things up a little. At this point the Propel pager is of little use, as it (AFAIK) only works with Propel objects. What I need is a class that meets the following requirements:

    1. Has a clean interface, with separation of concerns, so that the logic to retrieve records from the data source (e.g. database) is encapsulated in a class (or at least a separate file).
    2. Can work with arrays of objects.
    3. Provides pagination links, and only fetches the data required for the current page. Also, the pagination should 'split' the available page links if there are too many. For example, if there are potentially 1000 possible page links, the pages displayed should be something like FIRST 2,3 ... 999 LAST.
    4. Can return the number of all the records in the table being queried, so that the FIRST and LAST links are available (this requirement is actually already covered by the previous one - I just wanted to re-emphasise it).

    Can anyone recommend such a library, if they have used it successfully in the past? Alternatively, somebody may have 'hacked' (e.g. derived from) the current Propel pager to get it to do the things I listed above - please let me know.
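
    As a point of reference, a minimal sketch of the link-'splitting' logic from requirement 3 (purely illustrative, not a library recommendation):

    ```php
    <?php
    // Build a windowed list of page links: 1 ... 498 499 500 501 502 ... 1000
    function pageLinks($current, $totalPages, $window = 2)
    {
        $links = array(1);                            // always show FIRST
        $start = max(2, $current - $window);
        $end   = min($totalPages - 1, $current + $window);

        if ($start > 2) {
            $links[] = '...';
        }
        for ($i = $start; $i <= $end; $i++) {
            $links[] = $i;
        }
        if ($end < $totalPages - 1) {
            $links[] = '...';
        }
        if ($totalPages > 1) {
            $links[] = $totalPages;                   // always show LAST
        }
        return $links;
    }

    // print_r(pageLinks(500, 1000)); // 1 ... 498-502 ... 1000
    ```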

  • Capturing Drupal7 DOM content before page load for comparison

    - by ehime
    We have an MU (multisite) installation of Drupal 7 here at work, and are trying to temporarily hold back the swarm of bots we receive until we get a chance to load our content. I wrote a quick and dirty script to send 503 headers if we find a certain criterion via XPath (this can ALSO be done as a strpos/preg_match if the DOM is not well formed). In order to get the ball rolling, though, I need to figure out how to either:

    A) hijack the Drupal 7 bootstrap and pull all content through the filter below, or
    B) ob_flush content through the filter before content is loaded.

    The issue I am having is figuring out exactly where I can catch the content. I thought that index.php in Drupal 7 would be the suspect, but I'm a little confused as to where or how I should capture the contents. Here's the script; hopefully someone can point me in the right direction.

    ```php
    //error_reporting(-1);

    /* start query */
    $dom = new DOMDocument;
    $dom->preserveWhiteSpace = false;
    $dom->Load($_SERVER['PHP_SELF']);
    $xpath = new DOMXPath($dom);

    // if this exists we aren't ready to be read by bots
    $query = $xpath->query(".//*[@id='block-views-about-this-site-block']/div/div/div");
    // or $query = 'klat-badge'; // if this is a string, not DOM
    /* end query */

    if ($query->length > 0) {   // for the string variant: strpos($html, $query) !== false
        // require banlist
        require('botlist.php');
        $str = strtolower('/' . implode('|', array_unique($list)) . '/i');
        if (preg_match($str, strtolower($_SERVER['HTTP_USER_AGENT']))) {
            // so tell bots we're broken
            header('HTTP/1.1 503 Service Temporarily Unavailable');
            header('Status: 503 Service Temporarily Unavailable');
            exit;
        }
    }
    ```
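
    One possible interception point (an assumption about fit, not a tested recipe): in Drupal 7 a small custom module can run the user-agent check from hook_boot(), which fires on every request before the page is built, so no output capturing is needed. The content-readiness test would still need to come from somewhere cheaper than parsing the page, e.g. a flag you flip once loading is done.

    ```php
    <?php
    // botgate.module -- hypothetical module name; Drupal 7 invokes hook_boot()
    // on every request, before the page is rendered.
    function botgate_boot() {
      require_once dirname(__FILE__) . '/botlist.php';   // defines $list (assumption)
      $pattern = '/' . implode('|', array_unique($list)) . '/i';
      if (preg_match($pattern, strtolower($_SERVER['HTTP_USER_AGENT']))) {
        // Tell bots we're temporarily broken until the content is loaded.
        header('HTTP/1.1 503 Service Temporarily Unavailable');
        header('Status: 503 Service Temporarily Unavailable');
        exit;
      }
    }
    ```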

  • High CPU in IIS

    - by Miki Watts
    Hi all. I'm developing a POS application that has a local database on each POS computer and communicates with the server using WCF hosted in IIS. The application has been deployed at several customers for over a year now. About a week ago, we started getting reports from one of our customers that the server the IIS is hosted on is very slow. When I checked the issue, I saw the application pool with my process rocket to almost 100% CPU on an 8-CPU server. I checked the SQL Activity Monitor and network volume, and they showed no significant overload beyond what we usually see. When checking the threads in Process Explorer, I saw lots of threads repeatedly calling CreateApplicationContext. I tried installing .NET 2.0 SP1, according to some posts I found on the net, but it didn't solve the problem and replaced the function calls with CLRCreateManagedInstance. I'm about to capture a dump of the IIS processes using adplus and windbg and try to figure out what's wrong. Has anyone encountered something like this, or has an idea which direction I should check?

    P.S. The same version of the application is deployed at another customer, and there it works just fine. I also tried rolling back versions (even very old versions) and it still behaves exactly the same.

    Edit: well, problem solved. It turns out I had an SQL query in there that didn't limit the result set, and when the customer went over a certain number of rows, it started bogging down the server. It took me two days to find it, because of all the surrounding noise in the logs, but I waited for the night and took a dump then, which immediately showed me the query.

  • Modelling deterministic and nondeterministic data separately

    - by Superstringcheese
    I'm working with the Microsoft ADO.NET Entity Framework for a game project. Following the advice of other posters on SO, I'm considering modelling deterministic and nondeterministic data separately. The idea for this came from a discussion on multiplayer games, but it seemed to make sense in a single-player scenario as well.

    Deterministic (things that aren't going to change during gameplay):

    - Attributes (Strength, Agility, etc.) and their descriptions
    - Skills and their descriptions and requirements
    - Races, Factions, Equipment, etc.
    - Base Attribute/Skill/Equipment loadouts for monsters

    Nondeterministic (things that will change a lot during gameplay):

    - Beings' current AttributeModifiers (Potion of Might = +10 Strength), current health and mana, etc.
    - Player inventory, cash, experience, level
    - Player quest states
    - Player FactionRelationships
    - ...and so on.

    My deterministic model would serve as a set of constants. My nondeterministic model would provide my on-the-fly operable data and would be serialized to a savegame file to maintain game state between play sessions. The data store will be an embedded SQL Compact database. So I might want to create relations between my Attributes table (deterministic model) and my BeingAttributeModifiers table (nondeterministic model), but how do I set that up across models?

    ```
    Det model/db           Nondet model/db
     ____________           ________________________
    |Attributes  |         |PlayerAttributeModifiers|
    |------------|         |------------------------|
    |Id          |         |Id                      |
    |Name        |         |AttributeId             |
    |Description |         |SourceId                |
     ------------          |Value                   |
                            ------------------------
    ```

    Should I use two separate models (edmx) that transact with a single database containing both deterministic-type and nondeterministic-type tables? Or should/can I use two separate databases in one model? Or two models, each with their own database? With distinct models/dbs it seems like this will get really complicated, and I'll end up fighting EF a lot, rolling my own transaction code, and generally losing out on a lot of the advantages of the framework. I know these are vague questions; I'm just looking for a sanity check before I forge ahead any further.

  • jQuery Hover Panes Flickering on child

    - by Dirge2000
    OK. Here's my basic HTML structure:

    ```html
    <ul class="tabNavigation">
      <li>
        <a href="#">Main Button</a>
        <div class="hoverTab">
          <a href="#">Link Within Div</a>
        </div>
      </li>
    </ul>
    ```

    And here's my jQuery command:

    ```javascript
    $('ul.tabNavigation li').mouseenter(function() {
        $('ul.tabNavigation div.hoverTab').hide();
        $(this).children('div.hoverTab').stop(false, true).fadeIn('fast');
    });
    $('ul.tabNavigation li').mouseleave(function() {
        $('ul.tabNavigation div.hoverTab').hide();
        $(this).children('div.hoverTab').stop(false, true).show().fadeOut('fast');
    });
    ```

    When you mouseenter/mouseleave the LI, the child div is supposed to appear/disappear, but the problem is the A tag within the hoverTab div causes the tab to flicker - as if, by rolling over the link, the mouse has left the LI... Any suggestions?

  • MS Access (Jet) transactions, workspaces & scope

    - by Eric G
    I am having trouble with committing a transaction (using Access 2003 DAO). It's acting as if I never had called BeginTrans -- I get error 3034 on CommitTrans, "You tried to commit or rollback a transaction without first beginning a transaction"; and the changes are written to the database (presumably because they were never wrapped in a transaction). However, BeginTrans is run, if you step through it. I am running it within the Access environment using the DBEngine(0) workspace. The tables I'm updating are all opened via a Jet database connection (to the same database) and updated using DAO.Recordset.Update. The connection is opened before calling BeginTrans. I'm not doing anything weird in the middle of the transaction like closing/opening connections or multiple workspaces, etc. There is one nested transaction level (basically it's wrapping multiple transacted updates in an outer transaction, so if any fail they all fail). The inner transactions run without errors; it's the outer transaction that doesn't work.

    Here are a few things I've looked into and ruled out:

    - The transaction is spread across several methods, and BeginTrans and CommitTrans (and Rollback) are all in different places. But when I tried a simple test of running a transaction this way, it didn't seem like this should matter.
    - I thought maybe the database connection gets closed when it goes out of local scope, even though I have another 'global' reference to it (I'm never sure what DAO does with database connections, to be honest). But this seems not to be the case -- right before the commit, the connection and its recordsets are alive (I can check their properties, EOF = False, etc.).
    - My CommitTrans and Rollback are done within event callbacks. (Very basically, a parser program is throwing an 'onLoad' or 'onLoadFail' event at the end of parsing, which I am handling by either committing or rolling back the inserts I made during processing.) However, again, trying a simple test, it doesn't seem like this should matter.

    Any ideas why this isn't working for me? Thanks.

  • How do I perform this MultiArray setup in Java?

    - by Andy Barlow
    I come from a PHP background and I'm just getting my teeth into some Java. I was wondering how I could implement the following in Java as simply as possible, just echoing the results to a terminal via the usual System.out.print() method.

    ```php
    <?php
    $Results[0]['title'] = "No Country for Old Men";
    $Results[0]['run_time'] = "122 mins";
    $Results[0]['cert'] = "15";

    $Results[1]['title'] = "Old School";
    $Results[1]['run_time'] = "88 mins";
    $Results[1]['cert'] = "18";

    // Will basically show the above in order.
    foreach ($Results as $value) {    // note: $value is each inner array
        echo $value['title'];
        echo $value['run_time'];
        echo $value['cert'];
    }

    // Lets add some more as I need to do this in Java too
    $Results[2]['title'] = "Saving Private Ryan";
    $Results[2]['run_time'] = "153 mins";
    $Results[2]['cert'] = "15";

    // Lets remove the first one as an example of another need
    $Results[0] = null;
    ?>
    ```

    I hear there are "list iterators" or something that are really good for rolling through data like this. Perhaps it could be implemented with that? A fully working .java file would be most handy in this instance, including how to add and remove items from the array like the above. P.S. I do plan on using this for an Android app in the distant future, so hopefully it should all work on Android fine too, although I imagine this sort of thing works on anything Java-related :).
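
    For what it's worth, a sketch of one idiomatic translation, using a List of Maps so entries can be added, iterated in order and removed much like the PHP array (class name is arbitrary):

    ```java
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class Results {
        public static void main(String[] args) {
            List<Map<String, String>> results = new ArrayList<Map<String, String>>();

            Map<String, String> film = new HashMap<String, String>();
            film.put("title", "No Country for Old Men");
            film.put("run_time", "122 mins");
            film.put("cert", "15");
            results.add(film);

            film = new HashMap<String, String>();
            film.put("title", "Old School");
            film.put("run_time", "88 mins");
            film.put("cert", "18");
            results.add(film);

            // Iterate in order, like the PHP foreach
            for (Map<String, String> r : results) {
                System.out.print(r.get("title"));
                System.out.print(r.get("run_time"));
                System.out.print(r.get("cert"));
            }

            // Remove the first entry (shifts later entries down, unlike PHP's null)
            results.remove(0);
        }
    }
    ```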

  • What is the correct way to increment a field making up part of a composite key

    - by Tr1stan
    I have a bunch of tables whose primary key is made up of the foreign keys of other tables (a composite key). As a very cut-down version, the attributes might look like this:

    ```
    A[aPK, SomeFields]  1:M  B[bPK, aFK, SomeFields]  1:M  C[cPK, bFK, aFK, SomeFields]
    ```

    As data this could look like:

    ```
    A[aPK, SomeFields]:
    1, Foo
    2, Bar

    B[bPK, aFK, SomeFields]:
    1, 1, FooData1
    2, 1, FooData2
    1, 2, BarData1
    2, 2, BarData2

    C[cPK, bFK, aFK, SomeFields]:
    1, 1, 1, FooData1More
    2, 1, 1, FooData1More
    1, 2, 1, FooData2More
    2, 2, 1, FooData2More
    1, 1, 2, BarData1More
    2, 1, 2, BarData1More
    1, 2, 2, BarData2More
    2, 2, 2, BarData2More
    ```

    I've got this running in an MSSQL DBMS and I'm looking for the best way to increment the left-most column in each table when a new tuple is added to it. I can't use the auto-increment identity specification, as that has no idea that it is part of a composite key. I also don't want to use any aggregate function such as MAX(field)+1, as this will have adverse effects with multiple users inputting data, rolling back, etc. There might, however, be a nice trigger-based option here, but I'm not sure. This must be a common issue, so I'm hoping that someone has a lovely solution. As a side note, which may or may not affect the answer, I'm using Entity Framework 1.0 as my ORM within a C# MVC application.
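
    One commonly suggested pattern on SQL Server (a sketch following the example tables above; @aFK and @someFields stand in for procedure parameters) is to compute MAX+1 inside the insert's own transaction while holding a key-range lock, which closes the race that makes a bare MAX(field)+1 unsafe:

    ```sql
    -- Insert a new B row, numbering bPK per parent A row
    BEGIN TRANSACTION;

    DECLARE @newId INT;
    SELECT @newId = ISNULL(MAX(bPK), 0) + 1
    FROM B WITH (UPDLOCK, HOLDLOCK)     -- blocks concurrent inserts for this aFK
    WHERE aFK = @aFK;

    INSERT INTO B (bPK, aFK, SomeFields)
    VALUES (@newId, @aFK, @someFields);

    COMMIT TRANSACTION;
    ```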

  • Why can't I roll a loop in JavaScript?

    - by Carl Manaster
    I am working on a web page that uses dojo and has a number (6 in my test case, but variable in general) of project widgets on it. I'm invoking dojo.addOnLoad(init), and in my init() function I have these lines:

    ```javascript
    dojo.connect(dijit.byId("project" + 0).InputNode, "onChange", function() { makeMatch(0); });
    dojo.connect(dijit.byId("project" + 1).InputNode, "onChange", function() { makeMatch(1); });
    dojo.connect(dijit.byId("project" + 2).InputNode, "onChange", function() { makeMatch(2); });
    dojo.connect(dijit.byId("project" + 3).InputNode, "onChange", function() { makeMatch(3); });
    dojo.connect(dijit.byId("project" + 4).InputNode, "onChange", function() { makeMatch(4); });
    dojo.connect(dijit.byId("project" + 5).InputNode, "onChange", function() { makeMatch(5); });
    ```

    and change events for my project widgets properly invoke the makeMatch function. But if I replace them with a loop:

    ```javascript
    for (var i = 0; i < 6; i++)
        dojo.connect(dijit.byId("project" + i).InputNode, "onChange", function() { makeMatch(i); });
    ```

    same makeMatch() function, same init() invocation, same everything else - just rolling my calls up into a loop - the makeMatch function is never called; the objects are not wired. What's going on, and how do I fix it? I've tried using dojo.query, but its behavior is the same as in the for-loop case.
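
    A hedged observation: in the loop version all six callbacks close over the same i variable, which has already reached 6 by the time any onChange fires, so every handler would call makeMatch(6). The classic fix is to capture the current value per iteration, e.g.:

    ```javascript
    for (var i = 0; i < 6; i++) {
        (function(n) {   // n freezes the value of i for this iteration
            dojo.connect(dijit.byId("project" + n).InputNode, "onChange", function() {
                makeMatch(n);
            });
        })(i);
    }
    ```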

  • MySQL get point in time totals from related tables

    - by batfastad
    Hi everyone, we have an order book and invoicing system, and I've been tasked with trying to output monthly rolling totals from these tables, but I don't really know where to start. I think there's some SQL syntax that I don't even know about yet. I'm familiar with INNER/LEFT JOINs and GROUP BY etc., but grouping by date is confusing, since I don't know how to limit the data to only the current date that's being grouped by at that point. I think this will involve joining the tables to themselves or possibly a sub-select. I always thought it best to avoid sub-selects apart from when absolutely necessary.

    Basically the system has 3 tables:

    ```
    orders:       order_id, currency, order_stamp
    orders_lines: order_line_id, invoice_id, order_id, price
    invoices:     invoice_id, invoice_stamp
    ```

    order_stamp and invoice_stamp are UTC Unix timestamps stored as integers, not MySQL timestamps. I'm trying to get a listing by year/month showing the total of current unbilled orders (sum of price) at that point in time. Current orders are ones where order_stamp is less than or equal to 00:00 on the 1st of the month. Unbilled orders are ones where invoice_stamp is null or invoice_stamp is greater than 00:00 on the 1st of the month. At that point in time there may not be a related invoice yet, and invoice_id might be null. Anyone got any suggestions on what I should join to what and what I need to group by? Cheers, B
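
    A sketch of one possible shape for the query (it assumes a helper table months(month_start) holding the Unix timestamp of each 1st-of-month boundary of interest, which is an addition to the schema above):

    ```sql
    SELECT FROM_UNIXTIME(m.month_start, '%Y-%m') AS month,
           SUM(ol.price)                         AS unbilled_total
    FROM months m
    JOIN orders o        ON o.order_stamp <= m.month_start   -- "current" orders
    JOIN orders_lines ol ON ol.order_id = o.order_id
    LEFT JOIN invoices i ON i.invoice_id = ol.invoice_id
    WHERE i.invoice_id IS NULL                                -- unbilled...
       OR i.invoice_stamp > m.month_start                     -- ...or billed later
    GROUP BY m.month_start
    ORDER BY m.month_start;
    ```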
