Search Results

Search found 18151 results on 727 pages for 'upside down'.


  • Generate colors between red and green for a power meter?

    - by Simucal
    I'm writing a Java game and I want to implement a power meter for how hard you are going to shoot something. I need to write a function that takes an int between 0 and 100 and, based on how high that number is, returns a color between green (0 on the power scale) and red (100 on the power scale), similar to how volume controls work. What operation do I need to do on the red, green, and blue components of a color to generate the colors between green and red? So I could run, say, getColor(80) and it would return an orangish color (its values in R, G, B), or getColor(10), which would return a more green/yellow RGB value. I know I need to adjust the R, G, B components to get a new color, but I don't know specifically what goes up or down as the colors shift from green to red.

    Progress: I ended up using the HSV/HSB color space because I liked the gradient better (no dark browns in the middle). The function I used was (in Java):

        public Color getColor(double power) {
            double H = power * 0.4; // Hue (note 0.4 = green; see the hue chart below)
            double S = 0.9;         // Saturation
            double B = 0.9;         // Brightness
            return Color.getHSBColor((float) H, (float) S, (float) B);
        }

    Where "power" is a number between 0.0 and 1.0: 0.0 will return a bright red, 1.0 will return a bright green.

    Java hue chart: (chart image not reproduced here)

    Thanks everyone for helping me with this!
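    For reference, the straight-RGB operation the question asks about is a linear interpolation: red rises and green falls as power runs from 0 to 100. A minimal sketch of that idea, written in C# here with illustrative names (the same arithmetic carries over to java.awt.Color); as noted above, the HSB route avoids the muddy browns this produces mid-scale:

        using System;
        using System.Drawing;

        static class PowerMeterColors
        {
            // Linear blend from green (power = 0) to red (power = 100),
            // matching the scale described in the question.
            public static Color GetColor(int power)
            {
                power = Math.Max(0, Math.Min(100, power));
                int red = 255 * power / 100;           // rises with power
                int green = 255 * (100 - power) / 100; // falls with power
                return Color.FromArgb(red, green, 0);
            }
        }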


  • SQL: Speed Improvement - Cluttered union query

    - by vol7ron
        SELECT *
        FROM (
            SELECT a.user_id, a.f_name, a.l_name, b.user_id, b.f_name, b.l_name
            FROM current_tbl a
            INNER JOIN import_tbl b ON (a.user_id = b.user_id)
            UNION
            SELECT a.user_id, a.f_name, a.l_name, b.user_id, b.f_name, b.l_name
            FROM current_tbl a
            INNER JOIN import_tbl b
                ON (lower(a.f_name) = lower(b.f_name) AND lower(a.l_name) = lower(b.l_name))
        ) foo
        UNION
        SELECT a.user_id, a.f_name, a.l_name, '', '', ''
        FROM current_tbl a
        WHERE a.user_id NOT IN (
            SELECT user_id FROM (
                SELECT a.user_id, a.f_name, a.l_name, b.user_id, b.f_name, b.l_name
                FROM current_tbl a
                INNER JOIN import_tbl b ON (a.user_id = b.user_id)
                UNION
                SELECT a.user_id, a.f_name, a.l_name, b.user_id, b.f_name, b.l_name
                FROM current_tbl a
                INNER JOIN import_tbl b
                    ON (lower(a.f_name) = lower(b.f_name) AND lower(a.l_name) = lower(b.l_name))
            ) bar
        )
        ORDER BY user_id

    Example of table population:

        current_tbl:
         user_id | f_name | l_name
        ---------+--------+---------
         A1      | Adam   | Acorn
         A2      | Beth   | Berry
         A3      | Calv   | Chard

        import_tbl:
         user_id | f_name | l_name
        ---------+--------+---------
         A1      | Adam   | Acorn
         A2      | Beth   | Butcher   <- last name different

    Expected output:

         user_id1 | f_name1 | l_name1 | user_id2 | f_name2 | l_name2
        ----------+---------+---------+----------+---------+---------
         A1       | Adam    | Acorn   | A1       | Adam    | Acorn
         A2       | Beth    | Berry   | A2       | Beth    | Butcher
         A3       | Calv    | Chard   |          |         |

    Doing it this way gets rid of cases where the row would be:

         A2 | Beth | Berry | A2 | Beth | Butcher

    but it keeps the A3 row. I hope this makes sense and I haven't overly simplified it. This is a continuation of my other question. The succession of these improvements has brought the query down from ~32000 ms to where it is now, ~1200 ms - quite an improvement. I suspect I can optimize by using UNION ALL in the subquery, and of course the usual index optimizations, but I'm looking for the best SQL optimization. FYI, this particular case is for PostgreSQL.


  • Connection reset when calling disconnect() using enterprisedt's ftp java framework

    - by Frederik Wordenskjold
    I'm having trouble disconnecting from an FTP server using the enterprisedt Java FTP framework. I simply cannot call disconnect() on a FileTransferClient object without getting an error. I do not do anything besides connecting to the server and then disconnecting:

        // create client
        log.info("Creating FTP client");
        ftp = new FileTransferClient();

        // set remote host
        log.info("Setting remote host");
        ftp.setRemoteHost(host);
        ftp.setUserName(username);
        ftp.setPassword(password);

        // connect to the server
        log.info("Connecting to server " + host);
        ftp.connect();
        log.info("Connected and logged in to server " + host);

        // shut down client
        log.info("Quitting client");
        ftp.disconnect();
        log.info("Example complete");

    When running this, the log reads:

        INFO [test] 28 maj 2010 16:57:20.216 : Creating FTP client
        INFO [test] 28 maj 2010 16:57:20.263 : Setting remote host
        INFO [test] 28 maj 2010 16:57:20.263 : Connecting to server x
        INFO [test] 28 maj 2010 16:57:20.979 : Connected and logged in to server x
        INFO [test] 28 maj 2010 16:57:20.979 : Quitting client
        ERROR [FTPControlSocket] 28 maj 2010 16:57:21.026 : Read failed ('' read so far)

    And the stack trace:

        com.enterprisedt.net.ftp.ControlChannelIOException: Connection reset
            at com.enterprisedt.net.ftp.FTPControlSocket.readLine(FTPControlSocket.java:1029)
            at com.enterprisedt.net.ftp.FTPControlSocket.readReply(FTPControlSocket.java:1089)
            at com.enterprisedt.net.ftp.FTPControlSocket.sendCommand(FTPControlSocket.java:988)
            at com.enterprisedt.net.ftp.FTPClient.quit(FTPClient.java:4044)
            at com.enterprisedt.net.ftp.FileTransferClient.disconnect(FileTransferClient.java:1034)
            at test.main(test.java:46)

    It should be noted that I can connect and do things with the server without problems, like getting a list of files in the current working directory. But I can't, for some reason, disconnect! I've tried using both active and passive mode. The above example is, by the way, copy/pasted from their own example. I cannot find ANYTHING related to this in a Google search, so I was hoping you have suggestions, or experience with this issue.


  • C# - parse content away from structure in a binary file

    - by Jeff Godfrey
    Using C#, I need to read a packed binary file created using FORTRAN. The file is stored in an "unformatted sequential" format as described here (about half-way down the page, in the "Unformatted Sequential Files" section): http://www.tacc.utexas.edu/services/userguides/intel8/fc/f_ug1/pggfmsp.htm

    As you can see from the URL, the file is organized into "chunks" of 130 bytes or less, and includes 2 length bytes (inserted by the FORTRAN compiler) surrounding each chunk. So I need to find an efficient way to parse the actual file payload away from the compiler-inserted formatting. Once I've extracted the actual payload from the file, I'll then need to parse it up into its varying data types. That'll be the next exercise.

    My first thought is to slurp up the entire file into a byte array using File.ReadAllBytes, then iterate through the bytes, skipping the formatting and transferring the actual data to a second byte array. In the end, that second byte array should contain the actual file contents minus all the formatting, which I'd then need to go back through to get what I need. As I'm fairly new to C#, I thought there might be a better, more accepted way of tackling this. Also, in case it's helpful, these files could be fairly large (say 30MB), though most will be much smaller...
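    For what it's worth, a minimal C# sketch of a streaming version of that plan, assuming each chunk really is framed as one leading length byte, up to 128 payload bytes, and one matching trailing length byte (1 + 128 + 1 = 130, per the description above) - the exact framing should be checked against the linked format notes:

        using System;
        using System.IO;

        static class FortranSequentialReader
        {
            // Strips the compiler-inserted length bytes and returns the raw payload.
            public static byte[] ReadPayload(string path)
            {
                using (var input = new BinaryReader(File.OpenRead(path)))
                using (var payload = new MemoryStream())
                {
                    while (input.BaseStream.Position < input.BaseStream.Length)
                    {
                        byte length = input.ReadByte();          // leading length byte
                        byte[] chunk = input.ReadBytes(length);  // payload
                        byte trailer = input.ReadByte();         // trailing length byte
                        if (trailer != length)
                            throw new InvalidDataException("Corrupt chunk framing.");
                        payload.Write(chunk, 0, chunk.Length);
                    }
                    return payload.ToArray();
                }
            }
        }

    Reading through a BinaryReader also avoids holding both the raw file and the stripped copy in memory at once, which matters for the 30MB cases.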


  • How to make Facebook Authentication from Silverlight secure?

    - by SondreB
    I have the following scenario I want to complete: a website running some HTTP(S) services that return data for a user, with the same website additionally hosting a Silverlight 4 app which calls these services. The Silverlight app integrates with Facebook using the Facebook Developer Toolkit (http://facebooktoolkit.codeplex.com/). I have not fully decided whether I want the Facebook integration to be an "opt-in" option, as in Spotify, or whether to lock my service down to Facebook-only authentication - that's another discussion.

    How do I protect the API key and secret that I receive from Facebook in a Silverlight app? To me it seems obvious that this is impossible, as the code is running on the client, but is there a way I can make it harder, or should I just live with the fact that third parties could potentially "act" as my own app?

    Using the Facebook Developer Toolkit, the following C# method in Silverlight is executed from JavaScript when the user has fully authenticated with Facebook using the Facebook Connect APIs:

        [ScriptableMember]
        public void LoggedIn(string sessionKey, string secret, int expires, long userId)
        {
            this.SessionKey = sessionKey;
            this.UserId = userId;
            // ...
        }

    The obvious problem here is that JavaScript is injecting the userId, which is nothing but a simple number. This means anyone could potentially inject a different userId in JavaScript and have my app think it's someone else, which means someone could hijack the data within the services running on my website.

    The alternative that comes to mind is authenticating the users on my website. This way I'm never exposing any secrets, and I can return an auth cookie to the users after the initial authentication. That scenario doesn't work very well out of browser, though, where the user is running the Silverlight app locally and not from my website.


  • OutOfMemoryException

    - by Andrew
    I have an application that is pretty memory hungry. It holds a large amount of data in some big arrays. I have recently been noticing the occasional OutOfMemoryException. These OutOfMemoryExceptions are occurring long before my application (ASP.NET) has used up the 800 MB available to it. I have tracked the issue down to the area of code where the array is resized. The array contains a structure that is 74 bytes in size. (I know you shouldn't create structs bigger than 16 bytes, but this application is a port from a VB6 application.) I have tried changing the struct to a class, and this appears to have fixed the problem for now.

    I think the reason changing to a class solves the problem is that when using a struct and the array is resized, a segment of memory large enough to store the new array (e.g. (currentArraySize + increaseBySize) * 74 bytes) needs to be reserved, and one cannot always be found. This leads to the OutOfMemoryException. This isn't the case with a class, as each element of the array only needs 8 bytes to store a pointer to the object. Is my thinking correct here?
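    A sketch of the arithmetic behind that reasoning (sizes are the question's own figures; the types are illustrative):

        using System;

        struct Sample74 { /* fields totalling roughly 74 bytes */ }
        class Sample74Ref { /* the same fields as a reference type */ }

        static class ResizeDemo
        {
            static void Main()
            {
                const int n = 1000000;

                // Struct array: elements live inline, so a resize must find one
                // contiguous free block of newLength * 74 bytes while the old
                // array still exists - on a fragmented heap this is what fails.
                var structs = new Sample74[n];        // ~74 MB in a single block

                // Class array: the array holds only references (4 or 8 bytes each);
                // the ~74-byte objects are separate small allocations that can be
                // scattered across the heap.
                var classes = new Sample74Ref[n];     // ~4-8 MB in a single block

                Console.WriteLine("{0} structs, {1} references", structs.Length, classes.Length);
            }
        }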


  • Is this correct for disposing an object using IDisposable

    - by robUK
    I have a class that implements the IDisposable interface. I am using a WebClient to download some data with DownloadStringAsync. I am wondering: have I correctly declared my event handlers in the constructor and within the using statement of the WebClient? And is this the correct way to remove the event handlers in the Dispose method? Overall, is this the correct way to use the IDisposable interface?

        public class Balance : IDisposable
        {
            WebClient wc;

            // Constructor
            public Balance()
            {
                using (wc = new WebClient())
                {
                    // Create event handlers for the progress changed and download completed events
                    wc.DownloadProgressChanged += new DownloadProgressChangedEventHandler(wc_DownloadProgressChanged);
                    wc.DownloadStringCompleted += new DownloadStringCompletedEventHandler(wc_DownloadStringCompleted);
                }
            }

            ~Balance()
            {
                this.Dispose(false);
            }

            // Get the current balance for the user that is logged in.
            // If the balance returned from the server is NULL, display an error to the user.
            // Null could occur if the DB has been stopped or the server is down.
            public void GetBalance(string sipUsername)
            {
                // Remove the underscore (_) from the username, as this is not needed to get the balance.
                sipUsername = sipUsername.Remove(0, 1);
                string strURL = string.Format("https://www.xxxxxxx.com", sipUsername);

                // Download only when the webclient is not busy.
                if (!wc.IsBusy)
                {
                    // Download the current balance.
                    wc.DownloadStringAsync(new Uri(strURL));
                }
                else
                {
                    Console.Write("Busy please try again");
                }
            }

            // Return and display the balance after the download has fully completed.
            void wc_DownloadStringCompleted(object sender, DownloadStringCompletedEventArgs e)
            {
                // Pass the result to the event handler
            }

            // Dispose of the balance object.
            public void Dispose()
            {
                Dispose(true);
                GC.SuppressFinalize(this);
            }

            // Remove the event handlers.
            private bool isDisposed = false;

            private void Dispose(bool disposing)
            {
                if (!this.isDisposed)
                {
                    if (disposing)
                    {
                        wc.DownloadProgressChanged -= new DownloadProgressChangedEventHandler(wc_DownloadProgressChanged);
                        wc.DownloadStringCompleted -= new DownloadStringCompletedEventHandler(wc_DownloadStringCompleted);
                        wc.Dispose();
                    }
                    isDisposed = true;
                }
            }
        }
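    For comparison, a minimal sketch of the conventional shape of this pattern. Two things to note: the WebClient is created without a using block in the constructor (a using block disposes the client as soon as the block ends, leaving it unusable for later calls), and no finalizer is declared, since the class holds no unmanaged resources of its own:

        using System;
        using System.Net;

        public class BalanceSketch : IDisposable
        {
            private readonly WebClient wc;
            private bool isDisposed;

            public BalanceSketch()
            {
                // Keep the client alive for the lifetime of this object;
                // it is cleaned up in Dispose instead of a using block.
                wc = new WebClient();
                wc.DownloadStringCompleted += OnDownloadCompleted;
            }

            private void OnDownloadCompleted(object sender, DownloadStringCompletedEventArgs e)
            {
                // handle e.Result / e.Error here
            }

            public void Dispose()
            {
                if (isDisposed) return;
                wc.DownloadStringCompleted -= OnDownloadCompleted;
                wc.Dispose();
                isDisposed = true;
            }
        }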


  • Architectural conundrum

    - by Dejan
    The worst thing about working on a one-man project is the lack of input you usually get from coworkers, and because of that you tend to make obvious mistakes. After going down that road for some time, I need some help from the community.

    I started a little home-brew project that should turn into a portal of sorts. The main thing bothering me is the persistence layer I have concocted. It should be completely separated from the presentation layer for starters, and an OR mapper is also in the mix. This is because I have multiple data stores that have to be used. So the base idea was that the individual "repositories" each operate on their own database, and that the business layer then aggregates the business objects, which are then transformed in the presentation layer into view objects.

    The main problem I face is the following: multiple classes for the same concept - there is a DAL representation of a user, a BL representation of a user, and a view representation of a user. I can handle the transformation with a tool, but is this really the right way? I mean, they are all nicely separated, but the overhead is quite something. What do you think? Am I going too deep into the separation-of-concerns rabbit hole, or is this still normal?
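    To make the overhead in question concrete, a hypothetical C# sketch of the three representations and the mapping boilerplate they imply (all names are illustrative):

        // Persistence-layer representation (one per data store).
        class UserEntity { public int Id; public string Name; public string PasswordHash; }

        // Business-layer representation, aggregated from the repositories.
        class User { public int Id; public string Name; }

        // Presentation-layer view object.
        class UserView { public string DisplayName; }

        static class UserMappings
        {
            public static User ToBusiness(UserEntity e)
            {
                return new User { Id = e.Id, Name = e.Name };
            }

            public static UserView ToView(User u)
            {
                return new UserView { DisplayName = u.Name };
            }
        }

    Every field added to the concept then touches three classes and two mappings - that is the trade being weighed.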


  • Reading the selected value from asp:RadioButtonList using jQuery

    - by digiguru
    I've got code similar to the following...

        <p><label>Do you have buffet facilities?</label>
        <asp:RadioButtonList ID="blnBuffetMealFacilities:chk" runat="server">
            <asp:ListItem Text="Yes" Value="1"></asp:ListItem>
            <asp:ListItem Text="No" Value="0"></asp:ListItem>
        </asp:RadioButtonList></p>

        <div id="HasBuffet">
            <p><label>What is the capacity for the buffet?</label>
            <asp:RadioButtonList ID="radBuffetCapacity" runat="server">
                <asp:ListItem Text="Suitable for upto 30 guests" value="0 to 30"></asp:ListItem>
                <asp:ListItem Text="Suitable for upto 50 guests" value="30 to 50"></asp:ListItem>
                <asp:ListItem Text="Suitable for upto 75 guests" value="50 to 75"></asp:ListItem>
                <asp:ListItem Text="Suitable for upto 100 guests" value="75 to 100"></asp:ListItem>
                <asp:ListItem Text="Suitable for upto 150 guests" value="100 to 150"></asp:ListItem>
                <asp:ListItem Text="Suitable for upto 250 guests" value="150 to 250"></asp:ListItem>
                <asp:ListItem Text="Suitable for upto 400 guests" value="250 to 400"></asp:ListItem>
            </asp:RadioButtonList></p>
        </div>

    I want to capture an event when the radio list blnBuffetMealFacilities:chk changes client side, and perform a slide-down on the HasBuffet div from jQuery. What's the best way to create this, bearing in mind there are several similar sections on the page where I want questions to be revealed depending on a yes/no answer in a radio list?


  • Unit tests and Test Runner problems under .Net 4.0

    - by Brett Rigby
    Hi there,

    We're trying to migrate a .NET 3.5 solution to .NET 4.0, but are running into complications with the testing frameworks that can operate on an assembly built against version 4.0 of the .NET Framework. Previously we used NUnit 2.4.3.0 and NCover 1.5.8.0 within our NAnt scripts, but NUnit 2.4.3.0 doesn't like .NET 4.0 projects. So we upgraded to a newer version of the NUnit framework within the test project itself, but then found that NCover 1.5.8.0 doesn't support this version of NUnit. We get errors saying, in effect, that the assembly was built with a newer version of the .NET Framework than is currently in use, as the tools run on .NET Framework 2.0.

    We then tried Gallio's Icarus test runner GUI, but found that this and MbUnit only support up to version 3.5 of the .NET Framework, and the result is "the tests will be ignored".

    On the coverage side of things (for reporting into CruiseControl.NET), we have found that PartCover is a good candidate for substituting out NCover (as the newer version of NCover is quite dear, and PartCover is free), but this is a few steps down the line yet, as we can't get the test runners to work first!

    Can anyone shed any light on a testing framework that will run under .NET 4.0 in the way I've described above? If not, I fear we may have to revert to .NET 3.5 until the makers of the tooling we're currently using have a chance to catch up with .NET 4.0. Thanks.


  • Bringing ideas to life

    - by danixd
    I understand that this forum is filled with computer science geniuses, but there must come a point where your expertise in coding cannot keep up with your creativity. I am a front-end programmer and designer of sorts. I have so many ideas, with no way of implementing them. I know of platforms that can help - I am really into the Arduino project for physical ideas, and I know Flash is an amazing platform for software - but when it comes to complex ideas, such as a 3D game, things can be too much to handle individually.

    When you have an idea, what is your typical methodology for bringing it to life?

    - Write down the idea and save it for later (it never gets made, right...)
    - Try building it yourself, on the platforms you work with and are good at
    - Consult people on what platform it is best suited to, and collaborate with an expert in that field to do the dirty work
    - Consult people on what platform it is best suited to, and try to learn it and make it yourself (at least the alpha stages)

    Are there any communities that support this idea of bringing things to life? Can you pitch ideas to company X as a business model and hope they take you on / sponsor you? Do you have to spend lots of $$$ to get things made?


  • What programming hack from your past are you most ashamed of?

    - by LeopardSkinPillBoxHat
    We've all been there (usually when we are young and inexperienced). Fixing it properly is too difficult, too risky, or too time-consuming, so you go down the hack path. Which hack from your past are you most ashamed of, and why? I'm talking about the ones where you would be really embarrassed if someone could attribute the hack to you (quite easily if you are using revision control software). One hack per answer, please.

    Mine was shortly after I started my first job. I was working on a legacy C system, and there was this strange defect where a screen view failed to update properly under certain circumstances. I wasn't familiar with how to use the debugger at the time, so I added traces into the code to figure out what was going on. Then I realised that the defect didn't occur anymore with the traces in the code. I slowly backed the traces out one by one, until I realised that only a single trace was required to make the problem go away. My logic now tells me I was dealing with some sort of race condition or timing-related issue that the trace just "hid under the rug". But I checked in the code with the following line, and all was well:

        printf("");

    Which hacks are you ashamed of?


  • What is the right approach to checksumming UDP packets

    - by mr.b
    I'm building a UDP server application in C#, and I've come across a packet checksum problem. As you probably know, each packet should carry some simple way of telling the receiver whether the packet data is intact. Now, UDP already has a 2-byte checksum as part of its header, which is optional, at least in the IPv4 world. The alternative is to carry a custom checksum as part of the data section of each packet and verify it on the receiver.

    My question boils down to: is it better to rely on the (optional) checksum in the UDP packet header, or to make a custom checksum implementation part of the packet's data section? Perhaps the right answer depends on circumstances (as usual), so one circumstance here is that, even though the code is written and developed in .NET on Windows, it might have to run under Mono, so the eventual solution should be compatible with other platforms. I believe a custom checksum algorithm would be easily portable, but I'm not so sure about the first option. Any thoughts? General comments about packet checksumming are also welcome.
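    As a concrete version of the second option, a minimal sketch of an application-level checksum carried in the datagram payload. A simple additive checksum keeps the sketch short; a CRC32 would catch more error patterns, and either is plain byte arithmetic, so it ports to Mono unchanged:

        using System;

        static class PacketChecksum
        {
            // Prefixes the payload with a 2-byte additive checksum.
            public static byte[] Wrap(byte[] payload)
            {
                ushort sum = 0;
                foreach (byte b in payload) sum += b;      // wraps on overflow

                var packet = new byte[payload.Length + 2];
                packet[0] = (byte)(sum >> 8);
                packet[1] = (byte)(sum & 0xFF);
                Buffer.BlockCopy(payload, 0, packet, 2, payload.Length);
                return packet;
            }

            // Verifies and strips the checksum; returns null for a damaged packet.
            public static byte[] Unwrap(byte[] packet)
            {
                if (packet.Length < 2) return null;
                ushort expected = (ushort)((packet[0] << 8) | packet[1]);

                ushort sum = 0;
                for (int i = 2; i < packet.Length; i++) sum += packet[i];
                if (sum != expected) return null;

                var payload = new byte[packet.Length - 2];
                Buffer.BlockCopy(packet, 2, payload, 0, payload.Length);
                return payload;
            }
        }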


  • Find position of a node within a nodeset using xpath

    - by Phil Nash
    After playing around with position() in vain, I was googling around for a solution and arrived at this older Stack Overflow question, which almost describes my problem. The difference is that the nodeset I want the position within is dynamic, rather than a contiguous section of the document. To illustrate, I'll modify the example from the linked question to match my requirements. Note that each <b> element is within a different <a> element. This is the critical bit.

        <root>
            <a>
                <b>zyx</b>
            </a>
            <a>
                <b>wvu</b>
            </a>
            <a>
                <b>tsr</b>
            </a>
            <a>
                <b>qpo</b>
            </a>
        </root>

    Now if I queried, using XPath, for a/b, I'd get a nodeset of the four <b> nodes. I then want to find the position within that nodeset of the node that contains the string 'tsr'. The solution in the other post breaks down here:

        count(a/b[.='tsr']/preceding-sibling::*) + 1

    returns 1, because preceding-sibling navigates the document rather than the context node-set. Is it possible to work within the context nodeset?
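    XPath 1.0 has no direct way to ask for a node's position within an arbitrary node-set once the set spans different parents, so a common workaround is to index the selected set on the host side. A sketch in C#, purely for illustration (the question doesn't say which host language is in use):

        using System;
        using System.Xml;

        static class NodesetPosition
        {
            static void Main()
            {
                var doc = new XmlDocument();
                doc.LoadXml("<root><a><b>zyx</b></a><a><b>wvu</b></a>"
                          + "<a><b>tsr</b></a><a><b>qpo</b></a></root>");

                // Select the dynamic nodeset, then report the 1-based position
                // of the node whose value is 'tsr' within that set.
                XmlNodeList nodes = doc.DocumentElement.SelectNodes("a/b");
                for (int i = 0; i < nodes.Count; i++)
                {
                    if (nodes[i].InnerText == "tsr")
                        Console.WriteLine("position = " + (i + 1));  // prints 3
                }
            }
        }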


  • Avoiding GC thrashing with WSE 3.0 MTOM service

    - by Leon Breedt
    For historical reasons, I have some WSE 3.0 web services that I cannot upgrade to WCF on the server side yet (it is also a substantial amount of work to do so). These web services are being used for file transfers from client to server, using MTOM encoding. This also cannot be changed in the short term, for reasons of compatibility. Secondly, they are being called from both Java and .NET, and therefore need to be cross-platform, hence MTOM.

    How it works is that an "upload" WebMethod is called by the client, sending up a chunk of data at a time, since files being transferred could potentially be gigabytes in size. However, because I cannot control parts of the stack before the WebMethod is invoked, I cannot control the memory usage patterns of the web service.

    The problem I am running into is that for file sizes from about 50MB onwards, performance is absolutely killed by GC, since it appears that WSE 3.0 buffers each chunk received from the client in a new byte[] array, and by the time we've done 50MB we're spending 20-30% of the time doing GC. I've played with various chunk sizes, from 16k to 2MB, with no great difference in results: smaller chunks are killed by round-trip latency, and larger chunks just postpone the slowdown until GC kicks in.

    Any bright ideas on cutting down the garbage created by WSE? Can I plug into the pipeline somehow and jury-rig something that has access to the client's request stream and streams it to the WebMethod? I'm aware that it is possible to "stream" responses to the client using WSE (albeit very ugly), but this problem is with requests from the client.


  • How to configure outgoing connections from an SQL stored procedure?

    - by Peter Vestberg
    I am working on a .NET project which uses Microsoft SQL Server. In this project I need a CLR stored procedure (written in C#) that uses a remote web service. So when the stored procedure is executed on the SQL server, it makes web service calls and thus sends packets to a remote location. The problem is that when executing the SP I get: "System.Net.WebException: The request failed with HTTP status 403: Forbidden." The database user has full permission, the deployed CLR assembly and SP are even marked "unsafe", I tried signing it, etc., so none of that is causing the problem.

    When I execute the very same C# code from a simple console application instead of as a SP, it all works fine. So I started to suspect a network-related problem and had a packet sniffer running while executing both the SP and the console app version. What I realized was that the packets sent out had different destination IP addresses: the console app sent the packets directly to the web service IP, while the SP sent the packets to a proxy server we use in our company. Due to network policies the latter is not allowed, and that explains the "403 Forbidden" exception.

    So my question boils down to this: how can I configure the SP / MS SQL Server to NOT use that proxy? I want it to send the packets directly to the web service IP, just like the test console app (again, the C# code is the same, so it's not a programming matter). I've disabled all proxy settings in Internet Explorer in case the SQL server inherits those settings, but no luck. Any help would be greatly appreciated!

    Best regards, Peter
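    One thing worth ruling out from the C# side: .NET picks up a machine-level default proxy for outgoing HTTP unless the request clears it. A hedged sketch of what the call inside the CLR procedure could try (the endpoint URL is a placeholder; whether the SQL-hosted environment honours this is exactly what needs testing):

        using System.Net;

        // Inside the CLR stored procedure, before issuing the call:
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create("https://example-service-host/endpoint");
        request.Proxy = null;  // go direct instead of the inherited default proxy
                               // (on .NET 2.0, GlobalProxySelection.GetEmptyWebProxy() is the equivalent)

        using (WebResponse response = request.GetResponse())
        {
            // read and process the web service response here
        }

    If the call goes through a generated web service proxy class instead, the same idea applies through that class's Proxy property.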


  • JConsole does not connect when a program is waiting for console input

    - by Calm Storm
    Hi, I have a problem similar to the one mentioned here. I have a program which launches a Selenium browser and refreshes it every 2 seconds. My idea is to monitor the memory usage of my application as the page is being hit. The console program does a System.in.read(), and when the user presses the enter key, it stops the browser refresh thread.

    The program runs as I would expect it to. But when I fire up JConsole and attach it to the process, it looks like JConsole waits forever to attach. This is because of the System.in.read(), apparently, as shown in the JConsole source. In my code it makes perfect sense to terminate the browser when the user presses something, so is there any alternative available at all? (Posted code sample below.)

        package com.ekanathk;

        import java.util.ArrayList;
        import java.util.List;

        import org.junit.Test;

        public class MultiThread {

            @Test
            public void testSimple() throws Exception {
                List<Integer> x = new ArrayList<Integer>();
                for (int i = 0; i < 1000; i++) {
                    x.add(i);
                }
                System.out.print("Press Enter to stop (try attach a jconsole now....");
                System.in.read();
            }
        }

    The problem here seems to be that any System.in.read (whether in the main thread or another thread) completely blocks JConsole. I guess the problem essentially boils down to multiple threads requesting system input at the same time? Is there a solution?


  • iTextSharp Conversion from Table to PdfPTable

    - by Al.
    I have an old ASP.NET project, originally done in ASP.NET 1.1 with iText.NET and converted to .NET 2.0 and iTextSharp 4.1.6.0. It uses lots of Table objects (I'm assuming PdfPTable wasn't an option at the time it was created). I am trying to convert this code to use the latest iTextSharp 5.0.0 DLL, and now see that Table and Cell have been removed. I started converting anyway and soon found there is no equivalent for a lot of the functionality Table offered - mainly, AddCell no longer allows a (col, row) setting. There are literally thousands of these calls in this code, and the possibility of changing it to generate rows linearly looks hopeless at the moment. The current code looks something like:

        Dim myTable As New Table(NumReq + 2, IngDS.Tables(0).Rows.Count + 3)
        myTable.SetWidths(Width)
        myTable.Width = 100
        myTable.Padding = 2

        myCell = New Cell(New Phrase("Some Text", New iTextSharp.text.Font(iTextSharp.text.Font.HELVETICA, 8, iTextSharp.text.Font.NORMAL, iTextSharp.text.Color.BLACK)))
        myCell.SetHorizontalAlignment(Element.ALIGN_RIGHT)
        myCell.GrayFill = 0.75
        myTable.AddCell(myCell, Row, Col)

        myCell = New Cell(New Phrase("Other Text", New iTextSharp.text.Font(iTextSharp.text.Font.HELVETICA, 8, iTextSharp.text.Font.NORMAL, iTextSharp.text.Color.BLACK)))
        myCell.GrayFill = 0.75
        myTable.AddCell(myCell, Row, Col + 1)

    Before I embark down that road, I was hoping someone could point me in a direction I'm just totally missing that would make this conversion more simple. Any ideas? Thanks.
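    One way to avoid rewriting thousands of call sites: a hypothetical adapter (sketched in C#; the same shape works in VB.NET) that keeps the old AddCell(cell, row, col) signature, buffers cells by coordinate, and then emits them row-major into a PdfPTable, which only accepts cells sequentially:

        using System.Collections.Generic;
        using iTextSharp.text.pdf;

        // Illustrative only: buffers cells by (row, col) so existing
        // AddCell(cell, row, col) call sites can be ported mechanically.
        class GridTable
        {
            private readonly int cols;
            private readonly SortedDictionary<int, PdfPCell[]> rows =
                new SortedDictionary<int, PdfPCell[]>();

            public GridTable(int cols) { this.cols = cols; }

            public void AddCell(PdfPCell cell, int row, int col)
            {
                if (!rows.ContainsKey(row)) rows[row] = new PdfPCell[cols];
                rows[row][col] = cell;
            }

            // Emits the buffered grid row by row into a real PdfPTable.
            public PdfPTable Build()
            {
                PdfPTable table = new PdfPTable(cols);
                foreach (PdfPCell[] row in rows.Values)
                    foreach (PdfPCell cell in row)
                        table.AddCell(cell ?? new PdfPCell()); // empty filler keeps the grid aligned
                return table;
            }
        }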


  • Using Git to work with subversion: Ignoring modifications to tracked files

    - by Chris Nicola
    I am currently working with a Subversion repository, but I am using Git to work locally on my machine. It makes work much easier, but it also makes some of the bad behavior going on in the Subversion repo quite glaring, and that creates problems for me.

    There is a somewhat complex local build process after pulling down the code, and it creates (and unfortunately modifies) a number of files. Obviously these changes are not meant to be committed back to the repository. Unfortunately, the build process is actually modifying some tracked files (yes, most likely because someone mistakenly committed these build artifacts to the Subversion repository at some point). Since these are modifications, adding them to my ignore file does nothing for me. I can avoid checking these changes back in - I simply don't stage or commit them - but having unstaged local changes means I can't rebase without first cleaning them up.

    What I would like to know is whether there is any way to ignore future changes to a set of tracked files. Alternatively, is there another way to handle the problem I am having, or will I just have to tell whoever checked in these files to clean them up?


  • SQLite/Fluent NHibernate integration test harness initialization not repeatable after large data set

    - by Mark Rogers
    In one of my main data integration test harnesses I create and use Fluent NHibernate's SingleConnectionSessionSourceForSQLiteInMemoryTesting to get a fresh session for each test. After each test, I close the connection, session, and session factory, and throw out the nested StructureMap container they came from. This works for almost any simple data integration test I can think of, including ones that utilize Fluent NHibernate's PersistenceSpecification object.

    When I test the application's lengthy database bootstrapping process, which creates and saves thousands of domain objects, I start seeing issues. It's not that the setup and teardown fail; in fact, the test successfully bootstraps the in-memory database just as the application would bootstrap the real database in the production environment. The problem occurs when the database bootstrapping runs a second time on a new in-memory database, with a new session and session factory. The error is:

        NHibernate.StaleStateException : Unexpected row count: 0; expected: 1

    The row count is indeed unexpected; the row that the application under test is looking for should be in the session. You see, it's not that any data from the last integration test is sticking around - it's that for some reason the session just stops working mid-database-bootstrap. And I've looked everywhere for a place I might be holding on to an old session, and I can't find one. I've searched through the code for static singleton objects, but there are none anywhere near the code in question. I have a couple of StructureMap InstanceScope singletons, but they are thrown out with each nested container that is discarded after every test teardown.

    I've tried every possible variation on disposing and closing every object involved in each test teardown, and it still fails on this lengthy database bootstrap. Non-bootstrap-related database tests appear to work fine. I'm starting to run out of options and may have to surrender lengthy database integration tests in favor of WatiN-based acceptance tests.

    Can anyone give me any clue about why my tests based on SingleConnectionSessionSourceForSQLiteInMemoryTesting aren't repeatable? Any advice at all about how to make an NHibernate SQLite database integration test harness repeatable?


  • SDL Tridion Schema Field "List of Links" Options

    - by Alvin Reyes
    I'm looking to create an SDL Tridion schema with a list of repeatable links while avoiding multiple fields per link.

    Hyperlink: in a rich text field I have the following options for creating a hyperlink:
    - Component
    - Anchor
    - http://
    - mailto:
    - Other

    When content authors create one of these hyperlinks, they have the option to select linked (visible) text as well as title and target attributes that function like typical HTML hyperlinks. "Richtext" here means a Text field with Height of the Text Area = at least 2 rows and Allow Rich Text Formatting selected.

    Single Schema Field Link: when creating a single schema field, I see these options:
    - External Link (author options will include http://, mailto, Other)
    - Multimedia Link
    - Component Link (which can allow Multimedia values)

    Current ideas: the best out-of-the-box (OOTB) setups I've found for this "list of links" offer either:
    - a single 2-line RTF with instructions to create a hyperlink (of any type) in that field, or
    - separate fields for each type, plus additional fields for display name, target, and title (where the fields are assembled through template code); authors fill in only one of the fields (component link or external)

    Question: is there a way in the schema form designer, by updating the schema source, or through code, to offer the same (RTF) hyperlink drop-down options, but in a single field? I could be missing something, but I recognize this scenario isn't supported OOTB.


  • Combining data sets without losing observations in SAS

    - by John
    Hey guys, I know - another post, another problem :D :(. I took a screenshot to easily explain my problem: http://i39.tinypic.com/rhms0h.jpg

    As you can see, I want to merge two tables (again), the base and analyst tables. What I want to achieve is displayed in the bottom-right table. I'm calculating the number of total analysts and female analysts for each month in the analyst table. In the base table I have different observations for one company (here the company Alcoa, with ticker AA). When I use the following command:

        data want;
            merge base analyst;
            by month;
        run;

    I get the problem shown in the top-right corner: my observations in the main table are narrowed down to only 4 (one observation for each distinct year: 2001, 2002, 2005, 2006). What I want is for the observations not to be reduced, but for the same data to be placed for every year, as shown in the bottom-right corner. What am I missing in my merge command? In both tables I have month as a time count variable (the observations in my base table are monthly), on which I need to merge.

    For clarity I added two screenshots of my real databases in SAS. The base table: http://i42.tinypic.com/dr5jky.jpg The analyst table: http://i40.tinypic.com/eqpmqq.jpg And here is what my merged table looks like: http://i43.tinypic.com/116i62s.jpg

    You can clearly see that the merged table only has four observations left for AA (one for each unique year) instead of the original 8. Anyone have an idea how to solve this?


  • MySQL ORDER BY DESC is fast but ASC is very slow

    - by Pepper
    Hello, I'm completely stumped on this one. For some reason, when I sort this query by DESC it's super fast, but sorted by ASC it's extremely slow.

    This takes about 150 milliseconds:

        SELECT posts.id FROM posts USE INDEX (published)
        WHERE posts.feed_id IN ( 4953,622,1,1852,4952,76,623,624,10 )
        ORDER BY posts.published DESC
        LIMIT 0, 50;

    This takes about 32 seconds:

        SELECT posts.id FROM posts USE INDEX (published)
        WHERE posts.feed_id IN ( 4953,622,1,1852,4952,76,623,624,10 )
        ORDER BY posts.published ASC
        LIMIT 0, 50;

    The EXPLAIN is the same for both queries:

        id  select_type  table  type   possible_keys  key        key_len  ref   rows  Extra
        1   SIMPLE       posts  index  NULL           published  5        NULL  50    Using where

    I've tracked it down to "USE INDEX (published)". If I take that out, performance is the same both ways, but the EXPLAIN shows the query as less efficient overall:

        id  select_type  table  type   possible_keys  key      key_len  ref  rows  Extra
        1   SIMPLE       posts  range  feed_id        feed_id  4        \N   759   Using where; Using filesort

    And here's the table:

        CREATE TABLE `posts` (
            `id` int(20) NOT NULL AUTO_INCREMENT,
            `feed_id` int(11) NOT NULL,
            `post_url` varchar(255) NOT NULL,
            `title` varchar(255) NOT NULL,
            `content` blob,
            `author` varchar(255) DEFAULT NULL,
            `published` int(12) DEFAULT NULL,
            `updated` datetime NOT NULL,
            `created` datetime NOT NULL,
            PRIMARY KEY (`id`),
            UNIQUE KEY `post_url` (`post_url`,`feed_id`),
            KEY `feed_id` (`feed_id`),
            KEY `published` (`published`)
        ) ENGINE=InnoDB AUTO_INCREMENT=196530 DEFAULT CHARSET=latin1;

    Is there a fix for this? Thanks!


  • What headaches should I expect from using Trac?

    - by Dolph Mathews
    No tool is perfect, and I'm about to start several long-term projects using Trac, so I wanted a heads-up on the kinds of problems I may or may not experience with it. In other words, Trac meets my needs in the short term, and I've already made the decision to use it, but I want to know what to expect down the road.

    I am not looking for:
    - "Use product X instead of Trac because..." answers.
    - "Trac is great because..." answers.
    - A comparison to any other specific system.
    - "Trac doesn't support feature X" answers. I can read the feature list too, thank you very much.

    I am looking for:
    - "Feature X does not behave as expected..."
    - "Trac behaves oddly when..."
    - "Trac doesn't fully support..."
    - "Trac itself has a known bug that will likely never be fixed..."
    - And especially "Trac can't handle..." etc.

    So, what Trac-induced headaches do I have to look forward to? For future reference, this question was asked while Trac v0.11 was the latest stable release.


  • Architecture Guidance Needed?

    - by vijay
    We are about to automate a number of processes for our reporting team (daily reports, weekly reports, monthly reports, etc.). Mostly the process is pulling some data from Oracle and then filling in particular Excel template files. Each report, and so each template, is different from the others. Apart from the Excel file manipulation, there is hardly any business logic behind these.

    The client wants an integrated tool in which all the automated processes are placed as menus/submenus. Right now roughly 30 processes are waiting to be automated, and we are expecting more new reports in the next quarter.

    I am nowhere near having any practical experience when it comes to architecture. I have already been maintaining two or three systems (more than 4 years old) for this prestigious client, and the above-mentioned tool will very likely have to be maintained for another 3 years. From past experience I've been through the pain of implementing change requests against a rigid and undocumented code base, resulting in the breakdown of the system, and eventually of myself. So my main and topmost concern is maintainability.

    While searching for guidance I came across this link: Smart Clients Using CAB and SCSF. Is it appropriate for my requirement? Also, should I place each automated process in a separate form under a single project, or place them in separate projects under a single solution?

    Please correct me if I have missed any other important information. Thx.

