Search Results

Search found 9492 results on 380 pages for 'logic unit'.

  • What is best practice as far as using perl-isms (idiomatic expressions) in Perl?

    - by DVK
    A couple of years back I participated in writing the best practices/coding style guide for our (fairly large and often Perl-using) company. It was done by a committee of "senior" Perl developers. As with anything done by consensus, it had parts which everyone disagreed with. Duh.

    The part that rubbed people the wrong way the most was a strong recommendation NOT to use many Perlisms (loosely defined as code idioms not present in, say, C++ or Java), such as "Avoid using '... unless X;' constructs". The main rationale posited for rules like this one was that non-Perl developers would otherwise have a much harder time with the Perl code base. The assumption here, I guess, is that Perl code jockeys are a rarer breed overall - and among new hires to the company - than non-Perlers.

    I was wondering whether SO has any good arguments to support or reject this logic... it is mostly academic curiosity at this point, as the company's Perl coding standard is ossified and will never be revised again as far as I'm aware.

    P.S. Just to be clear, the question is in the context I noted - the answer for a smaller all-Perl development shop is obviously a resounding "use Perl to its maximum capability".

  • jQuery autocomplete - update options on dynamically created elements

    - by Ian Robinson
    Background

    The user enters values into multiple inputs on the page. Think of entering a list of widgets for sale. The user would select the type of a widget from a drop down, then enter the name of the widget vendor, and then the actual name of the widget. These UI elements all appear as one row, and the user is able to click an "add another widget" link to create a new row.

    The two input elements have autocomplete functionality attached to them using the DevBridge jQuery plugin. It suggests the name of the vendor and the name of the widget. I have recently wired up the logic to filter the name of the widget based on the name of the vendor you entered. This means setting the (custom) vendorName parameter when attaching the autocomplete plugin, so it can be passed to the server side to retrieve the widget names for that vendor that match the name the user entered. To achieve this, I'm dynamically attaching the autocomplete functionality to the second element (the widget name text box) when it receives focus. Here is an example:

        $('.autocomplete-widget').live('focus', function () {
            $(this).autocomplete({
                serviceUrl: '/something.ashx',
                params: {
                    type: 'widget',
                    vendorName: $(this).parent().prev().find('.autocomplete-vendor').val()
                }
            });
        });

    Problem

    At first glance, this looks to function quite well. However, I started noticing some inconsistencies, and upon inspection I found out that the call to the server is actually happening several times in some cases. Each time the widget name input element receives focus, the autocomplete functionality is attached again, and the calls to the server are made in proportion to how many times the input element has gained focus. I'd like to avoid making the call more than once, but retain the ability to attach (or update) the autocomplete functionality on focus. I've tried to use the autocomplete 'setOptions' method, but have only been able to get it to work with static elements, not the dynamically created ones.

    Question

    Is there a way I can clear out the autocomplete functionality before I attach another one?

  • How do I solve column width problems in a SSRS Tablix?

    - by David Stein
    I'm creating a simple report from Microsoft Dynamics CRM using the following dataset:

        SELECT  FQD.productidname,
                FQD.NEW_PRICEBREAKS,
                FQD.NEW_WEEKSARO,
                LTRIM(RTRIM(FP.NEW_PRODUCTNAME)) AS NewProductDesc,
                FQD.productdescription,
                FQD.quoteid,
                FQD.quantity,
                FQD.productiddsc,
                FQD.baseamount,
                FQD.lineitemnumber,
                FQD.priceperunit,
                FQD.extendedamount,
                ISNULL(FP.productnumber, '') AS productnumber,
                ISNULL(FQD.uomidname, '-') AS Unit,
                FQD.tax AS Tax,
                FQD.volumediscountamount * FQD.quantity AS Discount,
                FQD.manualdiscountamount AS MDiscount,
                FQD.quotedetailid,
                FQD.crm_moneyformatstring,
                FQD.NEW_PRICEPERUNIT,
                FQD.NEW_PRICEPERUNIT_BASE
        FROM    FilteredQuoteDetail FQD
                LEFT OUTER JOIN FilteredProduct FP ON FQD.productid = FP.productid
        WHERE   (FQD.quoteid = @CRM_QuoteId)

    The NewProductDesc field is too wide. If I shorten it in design view, it still comes out too wide in the presentation. I think the field is coming out that wide because the database field probably has a bunch of blank spaces at the end of every description. I could not find a way to force that field in the Tablix not to grow horizontally, so I attempted to remedy it in the dataset by replacing the NewProductDesc line with:

        LTRIM(RTRIM(FP.NEW_PRODUCTNAME)) AS NewProductDesc

    However, that has no effect either. Can anyone suggest why this behavior is occurring? Can anyone tell me how I can force the field not to grow horizontally?

  • How to resolve java.nio.charset.UnmappableCharacterException in Scala 2.8.0?

    - by Roman Kagan
    I'm using Scala 2.8.0 and trying to read a pipe-delimited file, as in the code snippet below:

        object Main {
          def main(args: Array[String]): Unit = {
            if (args.length > 0) {
              val lines = scala.io.Source.fromPath("QUICK!LRU-2009-11-15.psv")
              for (line <- lines) print(line)
            }
          }
        }

    Here's the error:

        Exception in thread "main" java.nio.charset.UnmappableCharacterException: Input length = 1
            at java.nio.charset.CoderResult.throwException(CoderResult.java:261)
            at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:319)
            at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:158)
            at java.io.InputStreamReader.read(InputStreamReader.java:167)
            at java.io.BufferedReader.fill(BufferedReader.java:136)
            at java.io.BufferedReader.read(BufferedReader.java:157)
            at scala.io.BufferedSource$$anonfun$1$$anonfun$apply$1.apply(BufferedSource.scala:29)
            at scala.io.BufferedSource$$anonfun$1$$anonfun$apply$1.apply(BufferedSource.scala:29)
            at scala.io.Codec.wrap(Codec.scala:65)
            at scala.io.BufferedSource$$anonfun$1.apply(BufferedSource.scala:29)
            at scala.io.BufferedSource$$anonfun$1.apply(BufferedSource.scala:29)
            at scala.collection.Iterator$$anon$14.next(Iterator.scala:149)
            at scala.collection.Iterator$$anon$2.next(Iterator.scala:745)
            at scala.collection.Iterator$$anon$2.head(Iterator.scala:732)
            at scala.collection.Iterator$$anon$24.hasNext(Iterator.scala:405)
            at scala.collection.Iterator$$anon$20.hasNext(Iterator.scala:320)
            at scala.io.Source.hasNext(Source.scala:209)
            at scala.collection.Iterator$class.foreach(Iterator.scala:534)
            at scala.io.Source.foreach(Source.scala:143)
            ...
            at infillreports.Main$.main(Main.scala:8)
            at infillreports.Main.main(Main.scala)
        Java Result: 1

  • Using RabbitMQ (Java client), is there a way to determine if network connection is closed during con

    - by MItch Branting
    I'm using RabbitMQ on RHEL 5.3 with the Java client. I have 2 nodes (machines). Node1 is consuming messages from a queue on Node2 using the Java helper class QueueingConsumer:

        QueueingConsumer consumer = new QueueingConsumer(channel);
        channel.basicConsume("MyQueueOnNode2", noAck, consumer);
        while (true) {
            QueueingConsumer.Delivery delivery = consumer.nextDelivery();
            // ... Process message - delivery.getBody()
        }

    If the interface is brought down on Node1 or Node2 (e.g. ifconfig eth1 down), the client (above) never knows the network isn't there anymore. Does RabbitMQ provide some type of configuration on the Java client that can be used to determine if the connection has gone away? Shutting down the RabbitMQ server on Node2 will trigger a ShutdownSignalException, which can be caught so the app can go into a reconnect loop. But bringing down the interface doesn't cause any type of exception to happen, so the code will wait forever on consumer.nextDelivery(). I've also tried using the timeout version of this call, e.g.:

        QueueingConsumer consumer = new QueueingConsumer(channel);
        channel.basicConsume("MyQueueOnNode2", noAck, consumer);
        int timeout_ms = 30000;
        while (true) {
            QueueingConsumer.Delivery delivery = consumer.nextDelivery(timeout_ms);
            if (delivery == null) {
                if (channel.isOpen() == false) // Seems to always return true
                {
                    throw new ShutdownSignalException();
                }
            } else {
                // ... Process message - delivery.getBody()
            }
        }

    but it appears that isOpen() always returns true (even though the interface is down). I assume registering a ShutdownListener on the connection will yield the same results, but I haven't tried that yet. Is there a way to configure some sort of heartbeat, or do you just have to write custom lease logic (e.g. "I'm here now") in order to get this to work?

  • C# StreamReader.ReadLine() - Need to pick up line terminators

    - by Tony Trozzo
    I wrote a C# program to read an Excel .xls/.xlsx file and output to CSV and Unicode text. I wrote a separate program to remove blank records. This is accomplished by reading each line with StreamReader.ReadLine(), going character by character through the string, and not writing the line to output if it contains all commas (for the CSV) or all tabs (for the Unicode text).

    The problem occurs when the Excel file contains embedded newlines (\x0A) inside the cells. I changed my XLS to CSV converter to find these newlines (since it goes cell by cell) and write them as \x0A, and normal lines just use StreamWriter.WriteLine().

    The problem occurs in the separate program to remove blank records. When I read in with StreamReader.ReadLine(), by definition it only returns the string with the line, not the terminator. Since the embedded newlines show up as two separate lines, I can't tell which is a full record and which is an embedded newline when I write them to the final file. I'm not even sure I can read in the \x0A, because everything on the input registers as '\n'. I could go character by character, but this destroys my logic to remove blank lines. Any ideas would be greatly appreciated.
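
    Since the converter writes real record breaks with StreamWriter.WriteLine() (CRLF) and embedded breaks as a bare \x0A, one option is to drop ReadLine() in the blank-record filter and walk the stream with Read(), which does deliver the terminator characters. A minimal sketch of that idea, under those assumptions (ProcessRecord is a hypothetical stand-in for the existing all-commas/all-tabs check):

        using System.IO;
        using System.Text;

        static class RecordFilter
        {
            static void ProcessRecord(string record)
            {
                // Existing all-commas / all-tabs blank-record check goes here.
            }

            public static void Filter(string path)
            {
                using (var reader = new StreamReader(path))
                {
                    var record = new StringBuilder();
                    int c;
                    while ((c = reader.Read()) != -1)
                    {
                        if (c == '\r')
                        {
                            // CRLF came from StreamWriter.WriteLine(), so this ends a real record.
                            if (reader.Peek() == '\n') reader.Read(); // swallow the '\n' of the pair
                            ProcessRecord(record.ToString());
                            record.Length = 0;
                        }
                        else
                        {
                            // A bare '\n' (\x0A) is an embedded cell newline: keep it in the record.
                            record.Append((char)c);
                        }
                    }
                    if (record.Length > 0) ProcessRecord(record.ToString());
                }
            }
        }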

  • "The specified view is invalid" in call to LimitedWebPartManager.AddWebPart in SharePoint 2010

    - by Lee Richardson
    This code used to work in WSS 3.0 / MOSS 2007 in FeatureReceiver.FeatureActivated:

        using (SPLimitedWebPartManager limitedWebPartManager =
            Site.GetLimitedWebPartManager("default.aspx", PersonalizationScope.Shared))
        {
            ListViewWebPart listViewWebPart = new ListViewWebPart
            {
                Title = title,
                ListName = list.ID.ToString("B").ToUpper(),
                ViewGuid = view.ID.ToString("B").ToUpper()
            };
            limitedWebPartManager.AddWebPart(listViewWebPart, zone, position);
        }

    I'm trying to convert to SharePoint 2010 and it now fails with:

        System.ArgumentException: The specified view is invalid.
            at Microsoft.SharePoint.SPViewCollection.get_Item(Guid guid)
            at Microsoft.SharePoint.WebPartPages.ListViewWebPart.EnsureListAndView(Boolean requireFullBlownViewSchema)
            at Microsoft.SharePoint.WebPartPages.ListViewWebPart.get_AppropriateBaseViewId()
            at Microsoft.SharePoint.WebPartPages.SPWebPartManager.AddWebPartInternal(SPSupersetWebPart superset, Boolean throwIfLocked)
            at Microsoft.SharePoint.WebPartPages.SPLimitedWebPartManager.AddWebPartInternal(WebPart webPart, String zoneId, Int32 zoneIndex, Boolean throwIfLocked)
            at Microsoft.SharePoint.WebPartPages.SPLimitedWebPartManager.AddWebPart(WebPart webPart, String zoneId, Int32 zoneIndex)

    Interestingly enough, when I run it from a unit test it works; it only fails in FeatureActivated. When I debug with Reflector, it is failing on this line:

        this.view = this.list.LightweightViews[new Guid(this.ViewGuid)];

    list.LightweightViews only returns one view, the default view, even though list.Views returns all of them. I have no idea what LightweightViews is supposed to mean, and I'm running out of ideas. Anyone else got any?

  • How to play non buffered .wav with MediaStreamSource implementation in Silverlight 4?

    - by kyrisu
    Background

    I'm trying to stream a wave file in Silverlight 4 using the MediaStreamSource implementation found here. The problem is I want to play the file while it's still buffering, or at least give the user some visual feedback while it's buffering. For now my code looks like this:

        private void button1_Click(object sender, RoutedEventArgs e)
        {
            HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create(
                new Uri(App.Current.Host.Source, "../test.wav"));
            //request.ContentType = "audio/x-wav";
            request.AllowReadStreamBuffering = false;
            request.BeginGetResponse(new AsyncCallback(RequestCallback), request);
        }

        private void RequestCallback(IAsyncResult ar)
        {
            this.Dispatcher.BeginInvoke(delegate()
            {
                HttpWebRequest request = (HttpWebRequest)ar.AsyncState;
                HttpWebResponse response = (HttpWebResponse)request.EndGetResponse(ar);
                WaveMediaStreamSource wavMss = new WaveMediaStreamSource(response.GetResponseStream());
                try
                {
                    me.SetSource(wavMss);
                }
                catch (InvalidOperationException)
                {
                    // This file is not valid
                }
                me.Play();
            });
        }

    The problem is that after setting request.AllowReadStreamBuffering to false, the stream no longer supports seeking, and the above-mentioned implementation throws an exception (keep in mind I've put some of the position-setting logic into an if (stream.CanSeek) block):

        Read is not supported on the main thread when buffering is disabled

    Question

    Is there a way to play a wav stream without buffering it in advance?

  • In Asp.Net MVC 2 is there a better way to return 401 status codes without getting an auth redirect

    - by Greg Roberts
    I have a portion of my site that has a lightweight xml/json REST API. Most of my site is behind forms auth, but only some of my API actions require authentication. I have a custom AuthorizeAttribute for my API that I use to check certain permissions, and when it fails it results in a 401. All is good, except that since I'm using forms auth, ASP.NET conveniently converts that into a 302 redirect to my login page.

    I've seen some previous questions that suggest somewhat hackish ways to either return a 403 instead, or to put some logic in the global.asax Application_EndRequest() that converts a 302 back to a 401 where it meets whatever criteria (Previous Question, Previous Question 2).

    What I'm doing now is sort of like one of those questions, but instead of checking Application_EndRequest() for a 302, I make my authorize attribute return status 666, which indicates to me that I need to set this to a 401. Here is my code:

        protected void Application_EndRequest()
        {
            if (Context.Response.StatusCode == MyAuthAttribute.AUTHORIZATION_FAILED_STATUS)
            {
                // check for 666 - status code of hidden 401
                Context.Response.StatusCode = 401;
            }
        }

    Even though this works, my question is: is there something in ASP.NET MVC 2 that would prevent me from having to do this? Or, in general, is there a better way? I would think this would come up a lot for anyone doing REST APIs, or just people doing ajax requests in their controllers. The last thing you want is to do a request and get the content of a login page instead of json.
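
    For what it's worth, the redirect comes from the FormsAuthenticationModule, which rewrites a 401 at the end of the request, so MVC 2 itself has no switch for this; the sentinel-status trick above is a common workaround. Later framework versions (.NET 4.5 onward) added HttpResponse.SuppressFormsAuthenticationRedirect, which lets the authorize attribute keep a genuine 401. A hedged sketch of that approach, for readers on newer runtimes only:

        public class ApiAuthorizeAttribute : AuthorizeAttribute
        {
            protected override void HandleUnauthorizedRequest(AuthorizationContext filterContext)
            {
                HttpResponseBase response = filterContext.HttpContext.Response;
                response.StatusCode = 401;
                // Stops the forms auth module from turning the 401 into a 302.
                // (Available from .NET 4.5 onward - not in the MVC 2 era.)
                response.SuppressFormsAuthenticationRedirect = true;
                filterContext.Result = new EmptyResult();
            }
        }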

  • Linq to SQL and concurrency with Rob Conery repository pattern

    - by David Hall
    I have implemented a DAL using Rob Conery's spin on the repository pattern (from the MVC Storefront project), where I map database objects to domain objects using LINQ and use LINQ to SQL to actually get the data. This is all working wonderfully, giving me the full control over the shape of my domain objects that I want, but I have hit a problem with concurrency that I thought I'd ask about here. I have concurrency working, but the solution feels like it might be wrong (just one of those itchy feelings). The basic pattern is:

        private MyDataContext _dataContext;
        private Table<Task> _tasks;

        public Repository(MyDataContext datacontext)
        {
            _dataContext = datacontext;
        }

        public IEnumerable<Domain.Task> GetTasks()
        {
            _tasks = _dataContext.Tasks;
            return from t in _tasks
                   select new Domain.Task
                   {
                       Name = t.Name,
                       Id = t.TaskId,
                       Description = t.Description
                   };
        }

        public void SaveTask(Domain.Task task)
        {
            // Logic for new tasks omitted...
            Task dbTask = (from t in _tasks
                           where t.TaskId == task.Id
                           select t).SingleOrDefault();
            dbTask.Description = task.Description;
            dbTask.Name = task.Name;
            _dataContext.SubmitChanges();
        }

    With that implementation I've lost concurrency tracking because of the mapping to the domain task. I get it back by storing the private Table<Task>, which is my data context's list of tasks at the time of getting the original task. I then update the tasks from this stored table and save what I've updated.

    This is working - I get change conflict exceptions raised when there are concurrency violations, just as I want. However, it just screams to me that I've missed a trick. Is there a better way of doing this? I've looked at the .Attach method on the data context, but that appears to require storing the original version in a similar way to what I'm already doing. I also know that I could avoid all this by doing away with the domain objects and letting the LINQ to SQL generated objects go all the way up my stack - but I dislike that just as much as I dislike the way I'm handling concurrency.
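
    For reference, the Attach route mentioned above would look roughly like the sketch below - a hypothetical two-argument SaveTask where the caller supplies the original values, so the repository no longer has to hold the Table between calls. Table<TEntity>.Attach(entity, original) lets LINQ to SQL diff the two copies and keep its optimistic-concurrency check:

        public void SaveTask(Domain.Task task, Domain.Task originalTask)
        {
            // Rebuild both versions of the LINQ to SQL entity from the domain objects.
            var original = new Task
            {
                TaskId = originalTask.Id,
                Name = originalTask.Name,
                Description = originalTask.Description
            };
            var modified = new Task
            {
                TaskId = task.Id,
                Name = task.Name,
                Description = task.Description
            };

            // Attach the pair; SubmitChanges still raises ChangeConflictException
            // if the row changed underneath us.
            _dataContext.Tasks.Attach(modified, original);
            _dataContext.SubmitChanges();
        }

    As the post notes, this still means keeping the original values around somewhere - it just moves that responsibility to the caller instead of the repository.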

  • Synchronizing Asynchronous request handlers in Silverlight environment

    - by Eric Lifka
    For our senior design project, my group is making a Silverlight application that utilizes graph theory concepts and stores the data in a database on the back end. We have a situation where we add a link between two nodes in the graph, and upon doing so we run analysis to re-categorize our clusters of nodes. The problem is that this re-categorization is quite complex and involves multiple queries and updates to the database, so if multiple instances of it run at once it quickly garbles data and breaks (by trying to re-insert already used primary keys). Essentially it's not thread safe, and we're trying to make it safe, and that's where we're failing and need help :). The create link function looks like this:

        private Semaphore dblock = new Semaphore(1, 1);

        // This function is on our service reference and gets called
        // by the client code.
        public int addNeed(int nodeOne, int nodeTwo)
        {
            dblock.WaitOne();
            submitNewNeed(createNewNeed(nodeOne, nodeTwo));
            verifyClusters(nodeOne, nodeTwo);
            dblock.Release();
            return 0;
        }

        private void verifyClusters(int nodeOne, int nodeTwo)
        {
            // Run analysis of nodeOne and nodeTwo in graph
        }

    All copies of addNeed should wait for the first one that comes in to finish before another can execute. But instead they all seem to be running and conflicting with each other in the verifyClusters method. One solution would be to force our front-end calls to be made synchronously, and in fact, when we do that everything works fine, so the code logic isn't broken. But when it's launched, our application will be deployed within a business setting and used by internal IT staff (or at least that's the plan), so we'll have the same problem. We can't force all clients to submit data at different times, so we really need to get it synchronized on the back end. Thanks for any help you can give; I'd be glad to supply any additional information that you could need!
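
    One thing worth checking: if the service uses WCF's default per-call instancing, every request gets its own service object - and therefore its own Semaphore - which would explain why the lock never takes effect. A hedged sketch (the service and contract names here are made up) of letting WCF itself serialize the calls instead:

        using System.ServiceModel;

        [ServiceContract]
        public interface IGraphService
        {
            [OperationContract]
            int addNeed(int nodeOne, int nodeTwo);
        }

        // One service instance, one call at a time: concurrent addNeed calls
        // queue up server-side instead of interleaving inside verifyClusters.
        [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                         ConcurrencyMode = ConcurrencyMode.Single)]
        public class GraphService : IGraphService
        {
            public int addNeed(int nodeOne, int nodeTwo)
            {
                // No Semaphore needed: WCF dispatches these calls one by one.
                submitNewNeed(createNewNeed(nodeOne, nodeTwo));
                verifyClusters(nodeOne, nodeTwo);
                return 0;
            }

            private object createNewNeed(int nodeOne, int nodeTwo) { /* as in the post */ return null; }
            private void submitNewNeed(object need) { /* as in the post */ }
            private void verifyClusters(int nodeOne, int nodeTwo) { /* run cluster analysis */ }
        }

    This trades throughput for safety, which may be acceptable for an internal IT tool.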

  • How to populate Java (web) application with initial data using Spring/JPA/Hibernate

    - by Tuukka Mustonen
    I want to set up my database with initial data programmatically. I want to populate my database for development runs, not for testing runs (that's easy). The product is built on top of Spring and JPA/Hibernate. The workflow is:

    1. Developer checks out the project
    2. Developer runs command/script to set up the database with initial data
    3. Developer starts the application (server) and begins developing/testing

    then:

    4. Developer runs command/script to flush the database and set it up with new initial data, because database structures or the initial data bundle were changed

    What I want is to set up my environment by required parts in order to call my DAOs and insert new objects into the database. I do not want to create initial data sets in raw SQL or XML, take dumps of the database, or whatever. I want to programmatically create objects and persist them in the database as I would in normal application logic.

    One way to accomplish this would be to start up my application normally and run a special servlet that does the initialization. But is that really the way to go? I would love to execute the initial data setup as a Maven task, and I don't know how to do that if I take the servlet approach.

    There is a somewhat similar question. I took a quick glance at the suggested DBUnit and Unitils, but they seem to be heavily focused on setting up testing environments, which is not what I want here. DBUnit does initial data population, but only using xml/csv fixtures, which is not what I'm after. Then, Maven has an SQL plugin, but I don't want to handle raw SQL. Maven also has a Hibernate plugin, but it seems to help only with Hibernate configuration and table schema creation (not with populating the db with data). How do I do this?

  • Installable ISAM not found

    - by lucky
    I have a requirement in which I upload Excel sheets to a SQL Server database. The business logic is executed and displayed as reports in PHP. It was working fine until yesterday. Today I tried to upload Excel files, and it throws an error message stating (my translation):

        The OLE DB provider "Microsoft.Jet.OLEDB.4.0" for linked server "(null)" has returned the message "Installable ISAM not found."

    This is the original message in German:

        [Microsoft][ODBC SQL Server Driver][SQL Server] OLE DB-Anbieter "Microsoft.Jet.OLEDB.4.0" für den Verbindungsserver "(null)" hat die Meldung "Installierbares ISAM nicht gefunden." zurückgeben.

    This is the query I used in the stored procedure:

        EXEC('SELECT * INTO temp FROM OPENROWSET(''Microsoft.Jet.OLEDB.4.0'',
             ''Excel 8.0;Database=' + @ba_bm_status + ''',' +
             '''SELECT * FROM [qry_BA_Controlling (Report)$]'')');

    where @ba_bm_status is an input parameter of the stored procedure, and qry_BA_Controlling (Report) is the worksheet name. The web server is IIS, and the connection is through ODBC. I have no information about this error. Can you please help me solve it?

  • MVC2 Apps (and others) sharing WCF services and authentication

    - by stupid-phil
    Hi, I've seen several similar scenarios explained here, but not my particular one. I wonder if someone could tell me which direction to go in?

    I am developing two (and more later) MVC2 apps. There will also be another (thicker) client later on (WPF or Silverlight, TBD). These all need to share the same authentication. For the MVC2 apps this should preferably be single log on - i.e. if a user logs in to one MVC2 app, they should be authorised on the other, as long as the cookie hasn't timed out. Forms authentication is to be used.

    All the apps need to use common business functionality and perform db access via a common WCF service app. It would be nice (I think) if the WCF app is not publicly accessible (i.e. blocked behind the firewall). The thicker client could use an additional service layer to access the common WCF app. What this should look like is:

        MVCApp1 - WCFAppCommon
        MVCApp2 - WCFAppCommon
        ThickClient - WCFApp2 - WCFAppCommon

    Is it possible to carry out all the authentication/authorization in WCFAppCommon? Otherwise I think I'll have to repeat all the security logic in the MVCApps and WCFApp2, whereas, to me, it seems to sit naturally in WCFAppCommon. On the other hand, it seems that if I authenticate/authorize in WCFAppCommon, I wouldn't be able to use forms authentication. Where I've seen possible solutions (that I haven't tried yet), they seem much more complex than forms authentication and a single DB.

    Any help appreciated, Phil

  • LINQ To SQL ignore unique constraint exception and continue

    - by Martin
    I have a single table in a database called Users:

        Users
        -----
        ID       (PK, Identity)
        Username (Unique Index)

    I have set up a unique index on the Username column to prevent duplicates. I am then enumerating through a collection and creating a new user in the database for each item. What I want to do is just insert a new user and ignore the exception if the unique key constraint is violated (as it's clearly a duplicate record in that case). This is to avoid having to craft "where not exists" kinds of queries.

    First off, is this going to be any more efficient, or should my insert code be checking for duplicates instead? I'm drawn more to having the database hold that logic, as this prevents any other type of client from inserting duplicate data.

    My other issue is related to LINQ to SQL. I have the following code:

        public class TestRepo
        {
            DatabaseDataContext database = new DatabaseDataContext();

            public void Add(string username)
            {
                database.Users.InsertOnSubmit(new User() { Username = username });
            }

            public void Save()
            {
                database.SubmitChanges();
            }
        }

    And then I iterate over a collection and insert new users, ignoring any exceptions:

        TestRepo repo = new TestRepo();
        foreach (var name in new string[] { "Tim", "Bob", "John" })
        {
            try
            {
                repo.Add(name);
                repo.Save();
            }
            catch { }
        }

    The first time this is run, great: I have three users in the table. If I remove the second one and run this code again, nothing is inserted. I expected the first insert to fail with the exception, the second to succeed (as I just removed that item from the DB), and the third to then fail. What seems to be happening is that once the SqlException is thrown (even though the loop continues to iterate), all of the next inserts fail - even when there isn't a row in the table that would cause a unique violation. Can anyone explain this?

    P.S. The only workaround I could find was to instantiate the repo each time before the insert; then it worked exactly as expected - indicating that it's something to do with the LINQ to SQL DataContext. Thanks.
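
    The behavior described is consistent with the failed INSERT staying in the DataContext's pending change set: every later SubmitChanges() retries the duplicate row first and fails again. A hedged sketch that leans into the workaround the post already found - a fresh context per attempt, swallowing only duplicate-key errors (SqlException is from System.Data.SqlClient; 2601/2627 are SQL Server's duplicate index/constraint error numbers):

        foreach (var name in new[] { "Tim", "Bob", "John" })
        {
            // A new context per insert, so a failed row never lingers
            // in a shared change set.
            using (var database = new DatabaseDataContext())
            {
                database.Users.InsertOnSubmit(new User { Username = name });
                try
                {
                    database.SubmitChanges();
                }
                catch (SqlException ex)
                {
                    // Ignore duplicates only; rethrow anything else.
                    if (ex.Number != 2601 && ex.Number != 2627)
                        throw;
                }
            }
        }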

  • WiX: Prevent 32-bit installer from running on 64-bit Windows

    - by Tom the Junglist
    Hi everyone,

    Due to user confusion, our app requires separate installers for the 32-bit and 64-bit versions of Windows. While the 32-bit installer runs fine on win64, it has the potential to create support headaches, and we would like to prevent this from happening. I want to prevent the 32-bit MSI installer from running on 64-bit Windows machines. To that end I have the following condition:

        <Condition Message="You are attempting to run the 32-bit installer on a 64-bit version of Windows.">
            <![CDATA[Msix64 AND (NOT Win64)]]>
        </Condition>

    with Win64 defined like this:

        <?if $(var.Platform) = "x64" ?>
            <?define PlatformString = "64-bit" ?>
            <?define Win64 ?>
        <?else ?>
            <?define PlatformString = "32-bit" ?>
        <?endif ?>

    Thing is, I can't get this check to work right: either it fires all the time, or none of the time. The goal is to check the presence of the run-time Msix64 property against the compile-time Win64 variable and throw an error if these don't line up, but the logic is not working how I intend it to. Has anyone come up with a better solution?

    Thanks!
    Tom

  • Getting MSDeploy working on our build/integration server - Is an MSBuild upgrade necessary?

    - by Jeff D
    We have what I think is a fairly standard build process:

    1. Developer: Check in code
    2. Build: Polls repo, sees change, and kicks off build that:
    3. Build: Updates from repo, builds w/ MSBuild, runs unit tests w/ NUnit
    4. Build: Creates installer package

    Our security team allows us to pull from the build server, but does not allow the build server to push. So we generally rdp in, d/l the installers, and run them, which rules out the slick deployment services - so I would need to generate packages instead. I'd like to use MSDeploy, except that we have the following issues:

    1. We're on .NET 3.5, and the MSBuild target (Package) that uses MSDeploy requires 4.0. Is there anything I'd need to install other than .NET 4.0 RC for this? (Would MSBuild be part of that upgrade?)
    2. When I generate packages with MSDeploy, I see that I don't have just one file. There's a zip, deploy.cmd, SourceManifest.xml, and SetParameters.xml. What are all the other files for, and why wouldn't they all be in the 'package'?
    3. It sounds as if you can create packages by telling the system to look at a working IIS site. But if the packages are built from a CI environment, aren't you basically out of luck here?

    It feels like they designed some of this for small-scale developers deploying from their dev environment. That's a fine use case, but I'm interested in seeing what everyone's enterprise experience is with the tool. Any suggestions?

  • Injecting Mockito mocks into a Spring bean

    - by teabot
    I would like to inject a Mockito mock object into a Spring (3+) bean for the purposes of unit testing with JUnit. My bean dependencies are currently injected using the @Autowired annotation on private member fields. I have considered using ReflectionTestUtils.setField, but the bean instance that I wish to inject into is actually a proxy and hence does not declare the private member fields of the target class. I do not wish to create a public setter for the dependency, as I would then be modifying my interface purely for the purposes of testing.

    I have followed some advice given by the Spring community, but the mock does not get created and the auto-wiring fails:

        <bean id="dao" class="org.mockito.Mockito" factory-method="mock">
            <constructor-arg value="com.package.Dao" />
        </bean>

    The error I currently encounter is as follows:

        ...
        Caused by: org...NoSuchBeanDefinitionException: No matching bean of type
        [com.package.Dao] found for dependency: expected at least 1 bean which
        qualifies as autowire candidate for this dependency. Dependency annotations:
        {@org...Autowired(required=true), @org...Qualifier(value=dao)}
            at org...DefaultListableBeanFactory.raiseNoSuchBeanDefinitionException(D...y.java:901)
            at org...DefaultListableBeanFactory.doResolveDependency(D...y.java:770)

    If I set the constructor-arg value to something invalid, no error occurs when starting the application context.

  • AS 400 Performance from .Net iSeries Provider

    - by Nathan
    Hey all,

    First off, I am not an AS 400 guy - at all. So please forgive me for asking any noobish questions here. Basically, I am working on a .NET application that needs to access the AS400 for some real-time data. Although I have the system working, I am getting very different performance results between queries. Typically, when I make the first request against a SPROC on the AS400, it takes ~14 seconds to get the full data set. After that initial call, any subsequent calls usually only take ~1 second to return. This performance improvement remains for ~20 mins or so before it takes 14 seconds again.

    The interesting part is that, if the stored procedure is executed directly in iSeries Navigator, it always returns within milliseconds (no change in response time). I wonder if it is a caching / execution plan issue, but I can only apply my SQL Server logic to the AS400, which is not always a match.

    Any suggestions on what I can do to receive a more consistent response time, or simply insight as to why the AS400 is acting in this manner when I am using the iSeries Data Provider for .NET? Is there a better access method that I should use? Just in case, here's the code I am using to connect to the AS400:

        Dim Conn As New IBM.Data.DB2.iSeries.iDB2Connection(ConnectionString)
        Dim Cmd As New IBM.Data.DB2.iSeries.iDB2Command("SPROC_NAME_HERE", Conn)
        Cmd.CommandType = CommandType.StoredProcedure

        Using Conn
            Conn.Open()
            Dim Reader = Cmd.ExecuteReader()
            Using Reader
                While Reader.Read()
                    'Do Something
                End While
                Reader.Close()
            End Using
            Conn.Close()
        End Using

  • ASP.NET MVC2 Access-Control: How to do authorization dynamically?

    - by Shaharyar
    We're currently rewriting our organization's ASP.NET MVC application, which has been written twice already (once in MVC1, once in MVC2 - thank god it wasn't production ready and too mature back then). This time, anyhow, it's going to be the real deal, because we'll be implementing more and more features as time passes, and the test runs with MVC1 and MVC2 showed that we're ready to upscale.

    Until now we were using controller and action authorization with AuthorizeAttribute's. But that won't do any longer, because our views are supposed to show different results based on the logged-in user.

    Use case: let's say you're the mayor of a city and you log in to federally managed software, and you can only access and edit the citizens in your city - where you are allowed to access those citizens via an entry in a specialized MajorHasRightsForCity table containing a MajorId and a CityId. What I thought of is something like this:

        public ViewResult Edit(int cityId)
        {
            if (Access.UserCanEditCity(currentUser, cityId))
            {
                var currentCity = Db.Cities.Single(c => c.id == cityId);
                return View(currentCity);
            }
            else
            {
                TempData["ErrorMessage"] = "You are not awesome enough to edit that shizzle!";
                return View();
            }
        }

    The static class Access would do all kinds of checks and return either true or false from its methods. This implies that I would need to change and edit all of my controllers every time something changes. (Which would be a pain, because all unit tests would need to be adjusted every time something changes...)

    Is doing something like that even allowed?
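
    If it helps, the per-entity check can usually be folded back into a filter attribute, so the controllers stay clean. A hedged sketch reusing the post's Access helper (and assuming the current user can be taken from the filter context's IPrincipal):

        public class CityAccessAttribute : ActionFilterAttribute
        {
            public override void OnActionExecuting(ActionExecutingContext filterContext)
            {
                // Pull the action's cityId argument and run the same check
                // the controller body does above.
                int cityId = Convert.ToInt32(filterContext.ActionParameters["cityId"]);
                var currentUser = filterContext.HttpContext.User;

                if (!Access.UserCanEditCity(currentUser, cityId))
                {
                    filterContext.Result = new HttpUnauthorizedResult();
                }
            }
        }

        // Usage:
        // [CityAccess]
        // public ViewResult Edit(int cityId) { ... }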

  • Pass object to dynamically loaded ListView ItemTemplate

    - by mickyjtwin
    I have been following this post on using multiple ItemTemplates in a ListView control. While following this example does produce output, I am trying to figure out how to pass an object through to the ItemTemplate's user control, which I have not been able to do:

        protected void lvwComments_OnItemCreated(object sender, ListViewItemEventArgs e)
        {
            ListViewDataItem currentItem = (e.Item as ListViewDataItem);
            Comment comment = (Comment)currentItem.DataItem;
            if (comment == null)
                return;

            string controlPath = string.Empty;
            switch (comment.Type)
            {
                case CommentType.User:
                    controlPath = "~/layouts/controls/General Comment.ascx";
                    break;
                case CommentType.Official:
                    controlPath = "~/layouts/controls/Official Comment.ascx";
                    break;
            }

            lvwComments.ItemTemplate = LoadTemplate(controlPath);
        }

    The user control is as follows:

        public partial class OfficialComment : UserControl
        {
            protected void Page_Load(object sender, EventArgs e)
            {
            }
        }

    In the example, the values are being output in the ascx page with:

        <%# Eval("ItemName") %>

    However, I need to access the list item in this control to do other logic, and I cannot figure out how to send through my Comment item. The sender object and EventArgs do not contain the info.
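
    One pattern that may help: when the template is instantiated, the user control's NamingContainer is the ListViewDataItem, so the bound object can be recovered during data binding rather than passed in. A hedged sketch (CurrentComment is a made-up property name):

        public partial class OfficialComment : UserControl
        {
            protected Comment CurrentComment { get; private set; }

            protected override void OnDataBinding(EventArgs e)
            {
                base.OnDataBinding(e);

                // The control sits inside a ListViewDataItem, whose DataItem
                // is the Comment this row was bound to.
                var item = (ListViewDataItem)NamingContainer;
                CurrentComment = (Comment)item.DataItem;
            }
        }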

  • MVP, WinForms - how to avoid bloated view, presenter and presentation model

    - by MatteS
    When implementing the MVP pattern in WinForms, I often end up with bloated view interfaces with too many properties, setters and getters. An easy example would be a view with 3 buttons and 7 textboxes, all having Value, Enabled and Visible properties exposed from the view. Add validation results to this, and you could easily end up with an interface with 40ish properties. Using the presentation model, there'll be a model with the same number of properties as well.

    How do you easily sync the view and the presentation model without bloated presenter logic that passes all the values back and forth? (With that 80ish lines of presenter code, imagine what the presenter test that mocks the model and view will look like... 160ish lines of code just to mock that transfer.)

    Is there any framework to handle this without resorting to WinForms databinding? (You might want to use different views than a WinForms view, and according to some, this sync should be the presenter's job.) Would you use AutoMapper?

    Maybe I'm asking the wrong questions, but it seems to me MVP easily gets bloated without some good solution here.
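
    For the sync itself, one lightweight option is a reflection-based copier that moves identically named, same-typed properties between the view and the presentation model, so the presenter makes one call instead of forty assignments. A hypothetical sketch, not a framework feature:

        using System;
        using System.Reflection;

        public static class PropertySync
        {
            // Copies every readable source property to a writable target
            // property with the same name and type.
            public static void Copy(object source, object target)
            {
                PropertyInfo[] targetProps = target.GetType().GetProperties();
                foreach (PropertyInfo sourceProp in source.GetType().GetProperties())
                {
                    if (!sourceProp.CanRead) continue;
                    PropertyInfo targetProp = Array.Find(targetProps,
                        p => p.CanWrite
                             && p.Name == sourceProp.Name
                             && p.PropertyType == sourceProp.PropertyType);
                    if (targetProp != null)
                        targetProp.SetValue(target, sourceProp.GetValue(source, null), null);
                }
            }
        }

        // In the presenter: PropertySync.Copy(model, view) and back again,
        // which also keeps the mock-heavy presenter tests down to one call.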

  • SQLiteQueryBuilder.buildQuery not using selectArgs?

    - by user297468
    Alright, I'm trying to query a sqlite database. I was trying to be good and use the query method of SQLiteDatabase, passing in the values via the selectionArgs parameter to ensure everything got properly escaped, but it wouldn't work: I never got any rows returned (no errors, either).

    I started getting curious about the SQL this generated, so I did some more poking around and found SQLiteQueryBuilder (and apparently Stack Overflow doesn't handle links with parentheses in them well, so I can't link to the anchor for the buildQuery method), which I assume uses the same logic to generate the SQL statement. I did this:

        SQLiteQueryBuilder builder = new SQLiteQueryBuilder();
        builder.setTables(BarcodeDb.Barcodes.TABLE_NAME);
        String sql = builder.buildQuery(
                new String[] { BarcodeDb.Barcodes.ID,
                               BarcodeDb.Barcodes.TIMESTAMP,
                               BarcodeDb.Barcodes.TYPE,
                               BarcodeDb.Barcodes.VALUE },
                "? = '?' AND ? = '?'",
                new String[] { BarcodeDb.Barcodes.VALUE,
                               barcode.getValue(),
                               BarcodeDb.Barcodes.TYPE,
                               barcode.getType() },
                null, null, null, null);
        Log.d(tag, "Query is: " + sql);

    The SQL that gets logged at this point is:

        SELECT _id, timestamp, type, value FROM barcodes WHERE (? = '?' AND ? = '?')

    However, here's what the documentation for SQLiteQueryBuilder.buildQuery says about the selectionArgs parameter:

        You may include ?s in selection, which will be replaced by the values
        from selectionArgs, in order that they appear in the selection.

    ...but it isn't working. Any ideas?

  • nHibernate Mapping with Oracle Varchar2 Data Types

    - by Blake Blackwell
    I am new to nHibernate and having some issues getting over the learning curve. My current question involves passing a string value as a parameter to a stored procedure. The error I get is:

        Input string is not in correct format.

    My mapping file looks like this:

        <?xml version="1.0" encoding="utf-8" ?>
        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2"
                           assembly="MyCompany.MyProject.Core"
                           namespace="MyCompany.MyProject.Core">
          <class name="MyCompany.MyProject.Core.MyTable" table="My_Table" lazy="false">
            <id name="Id" column="Id"></id>
            <property name="Name" column="Name" />
          </class>
          <sql-query name="sp_GetTable" callable="true">
            <query-param name="int_Id" type="int" />
            <query-param name="vch_MyId" type="String" />
            <return class="MyCompany.MyProject.Core.MyTable" />
            call procedure MYPKG.MYPROC(:int_Id,:vch_MyId)
          </sql-query>
        </hibernate-mapping>

    When I debug nHibernate, it looks like it is not an actual string value, but instead just an object value. Not sure about that, though...

    EDIT: Adding additional code for clarification.

    Unit test:

        List<ProcedureParameter> parms = new List<ProcedureParameter>();
        parms.Add(new ProcedureParameter { ParamName = "int_Id", ParamValue = 1 });
        parms.Add(new ProcedureParameter { ParamName = "vch_MyId", ParamValue = "{D18BED07-84AB-494F-A94F-6F894E284227}" });

        try
        {
            IList<MyTable> myTables = _context.GetAllByID<MyTable>("sp_GetTable", parms);
            Assert.AreNotEqual(0, myTables.Count);
        }
        catch (Exception ex)
        {
            throw ex;
        }

    Data context method:

        IQuery query = _session.GetNamedQuery(queryName);
        foreach (ProcedureParameter parm in parms)
        {
            query.SetParameter(parm.ParamName, "'" + parm.ParamValue + "'");
        }
        return query.List<T>();

    Come to think of it, it may have something to do with my DataContext method.
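
    If the quote-wrapping in that loop is the culprit - SetParameter("int_Id", "'1'") hands NHibernate a string where the mapping declares an int, which is one classic source of "Input string is not in correct format" - the fix may be as simple as passing the raw value and letting NHibernate do the binding. A hedged sketch of the same loop:

        IQuery query = _session.GetNamedQuery(queryName);
        foreach (ProcedureParameter parm in parms)
        {
            // Pass the value unquoted: NHibernate escapes and binds parameters
            // itself, so hand-added quotes turn the int parameter into an
            // unparseable string.
            query.SetParameter(parm.ParamName, parm.ParamValue);
        }
        return query.List<T>();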

  • LINQ to Entity, using a SQL LIKE operator

    - by Mario
    I have a LINQ to Entities query that pulls from a table, but I need to be able to create a "fuzzy" type search. So I need to add a where clause that searches by last name IF the user adds criteria to the search box (a textbox, which CAN be blank - in which case it pulls EVERYTHING). Here is what I have so far:

        var query = from mem in context.Member
                    orderby mem.LastName, mem.FirstName
                    select new
                    {
                        FirstName = mem.FirstName,
                        LastName = mem.LastName
                    };

    That will pull everything out of the Member table that is in the entity object. Then I have an addition to the logic:

        sLastName = formCollection["FuzzyLastName"].ToString();
        if (!String.IsNullOrEmpty(sLastName))
            query = query.Where(ln => ln.LastName.Contains(sLastName));

    The problem is that when the search button is pressed, nothing is returned (0 results). I have run the query I expect against the SQL Server and it returns 6 results. This is the query I expect:

        SELECT mem.LastName, mem.FirstName
        FROM Members mem
        WHERE mem.LastName = 'xxx'    -- where xxx is entered into the textbox

    Anyone see anything wrong with this?
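
    One detail worth noting while debugging this: the string methods translate to LIKE (or an equivalent substring test, depending on the EF version), not to the equality in the expected SQL above, so mismatches usually come from the input rather than the operator. A small reference sketch - sLastName as in the post; the Trim() is my addition, since a stray space from the form would make every LIKE fail:

        sLastName = formCollection["FuzzyLastName"].ToString().Trim();

        // Roughly how LINQ to Entities translates each option:
        query = query.Where(m => m.LastName.StartsWith(sLastName)); // LIKE 'xxx%'
        // query = query.Where(m => m.LastName.Contains(sLastName));   // LIKE '%xxx%'
        // query = query.Where(m => m.LastName.EndsWith(sLastName));   // LIKE '%xxx'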
