Search Results

Search found 21028 results on 842 pages for 'single player'.

  • SQL Server - Complex Dynamic Pivot columns

    - by user972255
    I have two tables "Controls" and "ControlChilds" Parent Table Structure: Create table Controls( ProjectID Varchar(20) NOT NULL, ControlID INT NOT NULL, ControlCode Varchar(2) NOT NULL, ControlPoint Decimal NULL, ControlScore Decimal NULL, ControlValue Varchar(50) ) Sample Data ProjectID | ControlID | ControlCode | ControlPoint | ControlScore | ControlValue P001 1 A 30.44 65 Invalid P001 2 C 45.30 85 Valid Child Table Structure: Create table ControlChilds( ControlID INT NOT NULL, ControlChildID INT NOT NULL, ControlChildValue Varchar(200) NULL ) Sample Data ControlID | ControlChildID | ControlChildValue 1 100 Yes 1 101 No 1 102 NA 1 103 Others 2 104 Yes 2 105 SomeValue Output should be in a single row for a given ProjectID with all its Control values first & followed by child control values (based on the ControlCode (i.e.) ControlCode_Child (1, 2, 3...) and it should look like this Also, I tried this PIVOT query and I am able to get the ChildControls table values but I dont know how to get the Controls table values. DECLARE @cols AS NVARCHAR(MAX); DECLARE @query AS NVARCHAR(MAX); select @cols = STUFF((SELECT distinct ',' + QUOTENAME(ControlCode + '_Child' + CAST(ROW_NUMBER() over(PARTITION BY ControlCode ORDER BY ControlChildID) AS Varchar(25))) FROM Controls C INNER JOIN ControlChilds CC ON C.ControlID = CC.ControlID FOR XML PATH(''), TYPE ).value('.', 'NVARCHAR(MAX)') , 1, 1, ''); SELECT @query ='SELECT * FROM ( SELECT (ControlCode + ''_Child'' + CAST(ROW_NUMBER() over(PARTITION BY ControlCode ORDER BY ControlChildID) AS Varchar(25))) As Code, ControlChildValue FROM Controls AS C INNER JOIN ControlChilds AS CC ON C.ControlID = CC.ControlID ) AS t PIVOT ( MAX(ControlChildValue) FOR Code IN( ' + @cols + ' )' + ' ) AS p ; '; execute(@query); Output I am getting: Can anyone please help me on how to get the Controls table values in front of each ControlChilds table values?

  • Is there a way to optimize this mysql query...?

    - by SpikETidE
    Hi Everyone... Say, I got these two tables....

    Table 1: Hotels

        hotel_id    hotel_name
        1           abc
        2           xyz
        3           efg

    Table 2: Payments

        payment_id  payment_date  hotel_id  total_amt  comission
        p1          23-03-2010    1         100        10
        p2          23-03-2010    2         50         5
        p3          23-03-2010    2         200        25
        p4          23-03-2010    1         40         2

    Now, I need to get the following details from the two tables:

    1. Given a particular date (say, 23-03-2010), the sum of the total_amt for each hotel for which a payment has been made on that particular date.
    2. All the rows that have the date 23-03-2010, ordered according to the hotel name.

    A sample output is as follows...

        +------------+------------+------------+---------------+
        | hotel_name | date       | total_amt  | commission    |
        +------------+------------+------------+---------------+
        | * abc      | 23-03-2010 | 140        | 12            |
        +------------+------------+------------+---------------+
        |+-----------+------------+------------+--------------+|
        || paymt_id  | date       | total_amt  | commission   ||
        |+-----------+------------+------------+--------------+|
        || p1        | 23-03-2010 | 100        | 10           ||
        |+-----------+------------+------------+--------------+|
        || p4        | 23-03-2010 | 40         | 2            ||
        |+-----------+------------+------------+--------------+|
        +------------+------------+------------+---------------+
        | * xyz      | 23-03-2010 | 250        | 30            |
        +------------+------------+------------+---------------+
        |+-----------+------------+------------+--------------+|
        || paymt_id  | date       | total_amt  | commission   ||
        |+-----------+------------+------------+--------------+|
        || p2        | 23-03-2010 | 50         | 5            ||
        |+-----------+------------+------------+--------------+|
        || p3        | 23-03-2010 | 200        | 25           ||
        |+-----------+------------+------------+--------------+|
        +------------------------------------------------------+

    Above is a sample of the table that has to be printed. The idea is first to show the consolidated detail for each hotel, and when the '*' next to the hotel name is clicked, the breakdown of the payment details becomes visible... But that can be done by some jQuery..!!! The table itself can be generated with PHP...

    Right now I am using two separate queries: one to get the sum of the amount and commission grouped by the hotel name, and the next to get the individual rows for each entry having that date in the table. This is, of course, because grouping the records to calculate sum() returns only one row for each hotel with the sum of the amounts...

    Is there a way to combine these two queries into a single one and do the operation in a more optimized way...?? Hope I am being clear.. Thanks for your time and replies...
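
    A sketch of one way to fold the two queries into a single round trip (MySQL; this assumes the Hotels/Payments tables above and that payment_date is a DATE column). WITH ROLLUP appends, per hotel, a summary row in which payment_id is NULL (plus one grand-total row at the end), so the PHP loop can tell detail rows and consolidated rows apart without a second query:

        SELECT h.hotel_name,
               p.payment_id,
               SUM(p.total_amt) AS total_amt,
               SUM(p.comission) AS commission
        FROM   Payments p
        JOIN   Hotels   h ON h.hotel_id = p.hotel_id
        WHERE  p.payment_date = '2010-03-23'
        GROUP  BY h.hotel_name, p.payment_id WITH ROLLUP;

    Each hotel's summary row arrives after its detail rows, so if the consolidated line has to be printed first, the application would buffer one hotel's rows at a time before emitting them.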

  • Who architected / designed C++'s IOStreams, and would it still be considered well-designed by today's standards?

    - by stakx
    First off, it may seem that I'm asking for subjective opinions, but that's not what I'm after. I'd love to hear some well-grounded arguments on this topic.

    In the hope of getting some insight into how a modern streams / serialization framework ought to be designed, I recently got myself a copy of the book Standard C++ IOStreams and Locales by Angelika Langer and Klaus Kreft. I figured that if IOStreams wasn't well-designed, it wouldn't have made it into the C++ standard library in the first place.

    After having read various parts of this book, I am starting to have doubts if IOStreams can compare to e.g. the STL from an overall architectural point-of-view. Read e.g. this interview with Alexander Stepanov (the STL's "inventor") to learn about some design decisions that went into the STL.

    What surprises me in particular:

    - It seems to be unknown who was responsible for IOStreams' overall design (I'd love to read some background information about this — does anyone know good resources?);
    - Once you delve beneath the immediate surface of IOStreams, e.g. if you want to extend IOStreams with your own classes, you get to an interface with fairly cryptic and confusing member function names, e.g. getloc/imbue, uflow/underflow, snextc/sbumpc/sgetc/sgetn, pbase/pptr/epptr (and there's probably even worse examples). This makes it so much harder to understand the overall design and how the single parts co-operate. Even the book I mentioned above doesn't help that much (IMHO).

    Thus my question: If you had to judge by today's software engineering standards (if there actually is any general agreement on these), would C++'s IOStreams still be considered well-designed? (I wouldn't want to improve my software design skills from something that's generally considered outdated.)

  • Nested parsers in happy / infinite loop?

    - by McManiaC
    I'm trying to write a parser for a simple markup language with happy. Currently, I'm having some issues with infinite loops and nested elements.

    My markup language basically consists of two elements, one for "normal" text and one for bold/emphasized text:

        data Markup = MarkupText String
                    | MarkupEmph [Markup]

    For example, a text like Foo *bar* should get parsed as [MarkupText "Foo ", MarkupEmph [MarkupText "bar"]]. Lexing of that example works fine, but parsing it results in an infinite loop - and I can't see why. This is my current approach:

        -- The main parser: Parsing a list of "Markup"
        Markups :: { [Markup] }
            : Markups Markup    { $1 ++ [$2] }
            | Markup            { [$1] }

        -- One single markup element
        Markup :: { Markup }
            : '*' Markups1 '*'  { MarkupEmph $2 }
            | Markup1           { $1 }

        -- The nested list inside *..*
        Markups1 :: { [Markup] }
            : Markups1 Markup1  { $1 ++ [$2] }
            | Markup1           { [$1] }

        -- Markup which is always available:
        Markup1 :: { Markup }
            : String            { MarkupText $1 }

    What's wrong with that approach? How could it be resolved?

    Update: Sorry. Lexing wasn't working as expected. The infinite loop was inside the lexer. Sorry. :)

    Update 2: On request, I'm using this as lexer:

        lexer :: String -> [Token]
        lexer [] = []
        lexer str@(c:cs)
            | c == '*'  = TokenSymbol "*" : lexer cs
            -- ...more rules...
            | otherwise = TokenString val : lexer rest
            where (val, rest) = span isValidChar str
                  isValidChar = (/= '*')

    The infinite recursion occurred because I had lexer str instead of lexer cs in that first rule for '*'. Didn't see it because my actual code was a bit more complex. :)

  • Simple way to embed an MP3 audio file in a (static) HTML file, starting to play at a specific time?

    - by Marcel
    Hi all, I want to produce a simple, static HTML file that has one or more embedded MP3 files in it. The idea is to have a simple means of listening to specific parts of an MP3 file: on a single click on a visual element, the sound should start to play; however, not from the beginning of the file, but from a specified starting point in that file (and play to the end of the file).

    This should all work locally from the client's local filesystem; the HTML file and the MP3 files do not reside on a webserver.

    So, how do I play the MP3 audio from a specific starting point? The solution I am looking for should be as simple as possible, while working on most browsers, including IE, Firefox and Safari.

    Note: I know the <embed> tag as described here, but this seems not to work with my target browsers. Also I have read about jPlayer and other JavaScript-based players, but since I have never written any JavaScript, I would prefer an HTML-only solution, if possible.

  • Oracle syntax - should we have to choose between the old and the new?

    - by Martin Milan
    Hi, I work on a code base in the region of about 1'000'000 lines of source, in a team of around eight developers. Our code is basically an application using an Oracle database, but the code has evolved over time (we have plenty of source code from the mid nineties in there!).

    A dispute has arisen amongst the team over the syntax that we are using for querying the Oracle database. At the moment, the overwhelming majority of our queries use the "old" Oracle syntax for joins, meaning we have code that looks like this...

    Example of inner join:

        select customers.*, orders.date, orders.value
        from customers, orders
        where customers.custid = orders.custid

    Example of outer join:

        select customers.custid, contacts.ContactName, contacts.ContactTelNo
        from customers, contacts
        where customers.custid = contacts.custid(+)

    As new developers have joined the team, we have noticed that some of them seem to prefer using SQL-92 queries, like this:

    Example of inner join:

        select customers.*, orders.date, orders.value
        from customers
        inner join orders on (customers.custid = orders.custid)

    Example of outer join:

        select customers.custid, contacts.ContactName, contacts.ContactTelNo
        from customers
        left join contacts on (customers.custid = contacts.custid)

    Group A say that everyone should be using the "old" syntax - we have lots of code in this format, and we ought to value consistency. We don't have time to go all the way through the code now rewriting database queries, and it wouldn't pay us if we had. They also point out that "this is the way we've always done it, and we're comfortable with it..."

    Group B, however, say that while they agree we don't have the time to go back and change existing queries, we really ought to be adopting the "new" syntax in code that we write from here on in. They say that developers only really look at a single query at a time, and that so long as developers know both syntaxes there is nothing to be gained from rigidly sticking to the old syntax, which might be deprecated at some point in the future.

    Without declaring with which group my loyalties lie, I am interested in hearing the opinions of impartial observers - so let the games commence!

    Martin.

    Ps. I've made this a community wiki so as not to be seen as just blatantly chasing after question points...

  • How to restart a WCF server from within a client?

    - by djerry
    Hey guys, I'm using WCF for server - client communication. The server will need to run as a service, so there's no GUI. The admin can change settings using the client program, and for those changes to take effect on the server, it needs to restart.

    This is my server setup:

        NetTcpBinding binding = new NetTcpBinding(SecurityMode.Message);
        Uri address = new Uri("net.tcp://localhost:8000");
        //_svc = new ServiceHost(typeof(MonitoringSystemService), address);
        _monSysService = new MonitoringSystemService();
        _svc = new ServiceHost(_monSysService, address);
        publishMetaData(_svc, "http://localhost:8001");
        _svc.AddServiceEndpoint(typeof(IMonitoringSystemService), binding, "Monitoring Server");
        _svc.Open();

    MonitoringSystemService is a class I'm using to handle client - server communication. It looks like this:

        [CallbackBehavior(ConcurrencyMode = ConcurrencyMode.Reentrant)]
        [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, MaxItemsInObjectGraph = 2147483647)]
        public class MonitoringSystemService : IMonitoringSystemService {}

    So I need to call a restart method on the server from the client, but I don't know how to restart (or even stop - start) the server. I hope I'm not missing any vital information. Thanks in advance.

  • Android: JUnit - running parameterized test cases

    - by puneetinderkaur
    Hi, I want to run a single test case with different parameters. In Java this is possible through JUnit 4, with code like this:

        package com.android.test;

        import static org.junit.Assert.assertEquals;

        import java.util.Arrays;
        import java.util.Collection;

        import org.junit.Test;
        import org.junit.runner.RunWith;
        import org.junit.runners.Parameterized;
        import org.junit.runners.Parameterized.Parameters;

        @RunWith(Parameterized.class)
        public class ParameterizedTestExample {

            private int number;
            private int number2;
            private int result;
            private functioncall coun;

            // constructor takes parameters from the array
            public ParameterizedTestExample(int number, int number2, int result1) {
                this.number = number;
                this.number2 = number2;
                this.result = result1;
                coun = new functioncall();
            }

            // array with parameters
            @Parameters
            public static Collection<Object[]> data() {
                Object[][] data = new Object[][] { { 1, 4, 5 }, { 2, 7, 9 }, { 3, 7, 10 }, { 4, 9, 13 } };
                return Arrays.asList(data);
            }

            // running our test
            @Test
            public void testCounter() {
                assertEquals("Result add x y", result, coun.calculateSum(this.number, this.number2), 0);
            }
        }

    Now my question is: can we run the same with Android test cases, where we have to deal with a Context? When I try to run the code using the Android JUnit runner, it gives me the error NoClassDefinationFound.

  • Categorize a set of phrases into a set of similar phrases

    - by Dingo
    I have a few apps that generate textual tracing information (logs) to log files. The tracing information is the typical printf() style - i.e. there are a lot of log entries that are similar (same format argument to printf), but differ where the format string had parameters.

    What would be an algorithm (URL, books, articles, ...) that will allow me to analyze the log entries and categorize them into several bins/containers, where each bin has one associated format? Essentially, what I would like is to transform the raw log entries into (formatA, arg0 ... argN) instances, where formatA is shared among many log entries. The formatA does not have to be the exact format used to generate the entry (even more so if this makes the algorithm simpler).

    Most of the literature and web info I found deals with exact matching, max substring matching, or k-difference matching (with k known/fixed ahead of time). Also, it focuses on matching a pair of (long) strings, or a single bin output (one match among all input). My case is somewhat different, since I have to discover what represents a (good-enough) match (generally a sequence of discontinuous strings), and then categorize each input entry into one of the discovered matches.

    Lastly, I'm not looking for a perfect algorithm, but something simple/easy to maintain. Thanks!

  • Password Cracking in 2010 and Beyond

    - by mttr
    I have looked a bit into cryptography and related matters during the last couple of days and am pretty confused by now. I have a question about password strength and am hoping that someone can clear up my confusion by sharing how they think through the following questions. I am becoming obsessed about these things, but need to spend my time otherwise :-)

    Let's assume we have an eight-character password that consists of upper and lower-case alphabetic characters, numbers and common symbols. This means we have 96^8 ~= 7.2 quadrillion different possible passwords.

    As I understand it, there are at least two approaches to breaking this password. One is to try a brute-force attack where we try to guess each possible combination of characters. How many passwords can modern processors (in 2010, a Core i7 Extreme for example) guess per second (how many instructions does a single password guess take, and why)? My guess would be that it takes a modern processor on the order of years to break such a password.

    Another approach would consist of obtaining a hash of my password as stored by operating systems and then searching for collisions. Depending on the type of hash used, we might get the password a lot quicker than by the brute-force attack. A number of questions about this:

    - Is the assertion in the above sentence correct?
    - How do I think about the time it takes to find collisions for MD4, MD5, etc. hashes?
    - Where does my Snow Leopard store my password hash, and what hashing algorithm does it use?

    And finally, regardless of the strength of file encryption using AES-128/256, the weak link is still my en/decryption password. Even if breaking the ciphered text would take longer than the lifetime of the universe, a brute-force attack on my de/encryption password (guess password, then try to decrypt file, try next password...) might succeed a lot earlier than the end of the universe. Is that correct?

    I would be very grateful if people could have mercy on me and help me think through these probably simple questions, so that I can get back to work.
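
    For the brute-force part, a rough order-of-magnitude calculation (the guess rates below are illustrative assumptions, not measurements - the real figure depends on the hash or key-derivation function and the hardware, not just the CPU):

        96^{8} \approx 7.2 \times 10^{15} \ \text{candidate passwords}

        \text{at } 10^{8} \ \text{guesses/s:} \quad 7.2 \times 10^{15} / 10^{8} \approx 7.2 \times 10^{7}\ \mathrm{s} \approx 2.3 \ \text{years}

        \text{at } 10^{10} \ \text{guesses/s:} \quad 7.2 \times 10^{15} / 10^{10} \approx 7.2 \times 10^{5}\ \mathrm{s} \approx 8 \ \text{days}

    In other words, whether "years" is the right answer hinges on how cheap a single guess is (a fast unsalted hash versus a deliberately slow key-derivation function) and how well the attack parallelises, far more than on the instruction count of any one processor.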

  • Testing performance of queries in MySQL

    - by Unreason
    I am trying to set up a script that would test the performance of queries on a development MySQL server. Here are more details:

    - I have root access
    - I am the only user accessing the server
    - Mostly interested in InnoDB performance
    - The queries I am optimizing are mostly search queries (SELECT ... LIKE '%xy%')

    What I want to do is to create a reliable testing environment for measuring the speed of a single query, free from dependencies on other variables.

    Till now I have been using SQL_NO_CACHE, but sometimes the results of such tests also show caching behaviour - taking much longer to execute on the first run and taking less time on subsequent runs. If someone can explain this behaviour in full detail I might stick to using SQL_NO_CACHE; I do believe that it might be due to the file system cache and/or caching of indexes used to execute the query, as this post explains. It is not clear to me when the buffer pool and key buffer get invalidated or how they might interfere with testing.

    So, short of restarting the MySQL server, how would you recommend setting up an environment that would be reliable in determining whether one query performs better than the other?
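
    A sketch of one pragmatic setup (MySQL 5.0 or later; the table and column names below are placeholders): instead of chasing a truly cold cache, compare candidate queries on their warm, steady-state timings and on the session Handler_* counters, which count logical row operations and do not depend on what happens to be cached:

        FLUSH STATUS;                                  -- reset the session status counters
        SELECT SQL_NO_CACHE * FROM your_table
        WHERE  your_column LIKE '%xy%';                -- candidate query; repeat a few times for timing
        SHOW SESSION STATUS LIKE 'Handler_read%';      -- rows read via index lookups vs. scans
        SHOW SESSION STATUS LIKE 'Created_tmp%';       -- temporary tables/files, if any

    The first run after a restart will always be slower because the InnoDB buffer pool and the OS file cache are cold, which is most likely the behaviour seen despite SQL_NO_CACHE (that hint only bypasses the query cache). The Handler counters, by contrast, come out the same for cold and warm runs, which makes them a steadier basis for deciding which of two queries does less work.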

  • Correct way to generate order numbers in SQL Server

    - by Anton Gogolev
    This question certainly applies to a much broader scope, but here it is. I have a basic ecommerce app where users can, naturally enough, place orders. Said orders need to have a unique number, which I'm trying to generate right now.

    Each order is vendor-specific. Basically, I have an OrderNumberInfo (VendorID, OrderNumber) table. Now whenever a customer places an order I need to increment OrderNumber for a particular vendor and return that value. Naturally, I don't want other processes to interfere with me, so I need to exclusively lock this row somehow:

        begin transaction

        declare @n int

        select @n = OrderNumber
        from   OrderNumberInfo
        where  VendorID = @vendorID

        update OrderNumberInfo
        set    OrderNumber = @n + 1
        where  OrderNumber = @n and VendorID = @vendorID

        commit transaction

    Now, I've read about select ... with (updlock rowlock), pessimistic locking, etc., but just cannot fit all this into a coherent picture:

    - How do these hints play with SQL Server 2008's snapshot isolation?
    - Do they perform row-level, page-level or even table-level locks?
    - How does this tolerate multiple users trying to generate numbers for a single vendor?
    - What isolation levels are appropriate here?

    And generally - what is the way to do such things?
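
    For comparison, a sketch of one commonly used pattern (SQL Server 2005 and later, against the OrderNumberInfo table above): a single UPDATE that increments and returns the value in one statement, so there is no gap between reading and writing and no explicit lock hints are needed:

        DECLARE @result TABLE (OrderNumber int);

        UPDATE OrderNumberInfo
        SET    OrderNumber = OrderNumber + 1
        OUTPUT inserted.OrderNumber INTO @result
        WHERE  VendorID = @vendorID;

        SELECT OrderNumber FROM @result;   -- the freshly assigned number

    The UPDATE takes an exclusive lock on the vendor's row for the duration of the statement, so concurrent callers for the same vendor simply queue up. Under SNAPSHOT isolation a blocked writer can instead fail with an update-conflict error and would need a retry, so READ COMMITTED is the less surprising level for this particular statement.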

  • Python 3.0 IDE - Komodo and Eclipse both flaky?

    - by victorhooi
    heya, I'm trying to find a decent IDE that supports Python 3.x, and offers code completion, an in-built Pydocs viewer, Mercurial integration, and SSH/SFTP support.

    Anyhow, I'm trying Pydev: I open up a .py file, it's in the Pydev perspective, and Run As doesn't offer any options. It does when you start a Pydev project, but I don't want to start a project just to edit one single Python script, lol, I want to just open a .py file and have It Just Work...

    Plan 2, I try Komodo 6 Alpha 2. I actually quite like Komodo, and it's nice and snappy, offers in-built Mercurial support, as well as in-built SSH support (although it lacks SSH HTTP proxy support, which is slightly annoying). However, for some reason, this refuses to pick up Python 3. In Edit - Preferences - Languages, there are two options, one for Python and one for Python3, but the Python3 one refuses to work, with either the official Python.org binaries or ActiveState's own ActivePython 3. Of course, I can set the "Python" interpreter to the 3.1 binary, but that's an ugly hack and breaks Python 2.x support.

    So, does anybody who uses an IDE for Python have any suggestions on either of these accounts, or can you recommend an alternate IDE for Python 3.0 development?

    Cheers, Victor

  • MEF GetExports<T, TMetaDataView> returning nothing with AllowMultiple = True

    - by sohum
    I don't understand MEF very well, so hopefully this is a simple fix of how I think it works. I'm trying to use MEF to get some information about a class and how it should be used. I'm using the Metadata options to try to achieve this.

    My interfaces and attribute look like this:

        public interface IMyInterface { }

        public interface IMyInterfaceInfo
        {
            Type SomeProperty1 { get; }
            double SomeProperty2 { get; }
            string SomeProperty3 { get; }
        }

        [MetadataAttribute]
        [AttributeUsage(AttributeTargets.Class, AllowMultiple = true)]
        public class ExportMyInterfaceAttribute : ExportAttribute, IMyInterfaceInfo
        {
            public ExportMyInterfaceAttribute(Type someProperty1, double someProperty2, string someProperty3)
                : base(typeof(IMyInterface))
            {
                SomeProperty1 = someProperty1;
                SomeProperty2 = someProperty2;
                SomeProperty3 = someProperty3;
            }

            public Type SomeProperty1 { get; set; }
            public double SomeProperty2 { get; set; }
            public string SomeProperty3 { get; set; }
        }

    The class that is decorated with the attribute looks like this:

        [ExportMyInterface(typeof(string), 0.1, "whoo data!")]
        [ExportMyInterface(typeof(int), 0.4, "asdfasdf!!")]
        public class DecoratedClass : IMyInterface { }

    The method that is trying to use the import looks like this:

        private void SomeFunction()
        {
            // CompositionContainer is an instance of CompositionContainer
            var myExports = CompositionContainer.GetExports<IMyInterface, IMyInterfaceInfo>();
        }

    In my case myExports is always empty. In my CompositionContainer, I have a Part in my catalog that has two ExportDefinitions, both with the following ContractName: "MyNamespace.IMyInterface". The Metadata is also loaded correctly per my exports. If I remove the AllowMultiple setter and only include one exported attribute, the myExports variable now has the single export with its loaded metadata.

    What am I doing wrong?

  • iPhone OS: Is there a way to set up KVO between two ManagedObject Entities?

    - by nickthedude
    I have 2 entities I want to link with KVO: one a single statTracker class that keeps track of different stats, and the other an achievement class that contains information about achievements.

    Ideally what I want to be able to do is set up KVO by having an instance of the achievement class observe a value on the statTracker class, and also set up a threshold value at which the achievement instance should be "triggered" (triggering in this case would mean showing a UIAlertView and changing a property on the achievement class). I'd like to set these relationships up on instantiation of the achievement class if possible, so kind of like this:

        Achievement *achievement1 = (Achievement *)[NSEntityDescription
            insertNewObjectForEntityForName:@"Achievement"
            inManagedObjectContext:[[CoreDataSingleton sharedCoreDataSingleton] managedObjectContext]];
        [achievement1 setAchievementName:@"2 time launcher"];
        [achievement1 setAchievementDescription:@"So you've decided to come back for more eh? Here are some achievement points to get you going"];
        [achievement1 setAchievementPoints:[NSNumber numberWithInt:300]];
        [achievement1 setObjectToObserve:@"statTrackerInstace"
                       propertyToObserve:@"timesLaunched"
              valueOfPropertToSatisfyAchievement:2];

    Anyone out there know how I would set this up? Is there some way I could do this by way of relationships that I'm not seeing?

    Thanks, Nick

  • How can I control the width of JTextFields in Java Swing?

    - by Jonas
    I am trying to have several JTextFields on a single row, but I don't want them to have the same width. How can I control the width and make some of them wider than others? I want them together to take up 100% of the total width, so it would be good if I could use some kind of weighting. I have tried .setColumns() but it doesn't make sense. Here is my code:

        class Row extends JComponent {
            public Row() {
                this.setLayout(new BoxLayout(this, BoxLayout.X_AXIS));
                JTextField fld1 = new JTextField("First field");
                JTextField fld2 = new JTextField("Second field, should be longer");
                JTextField fld3 = new JTextField("Short field");
                JTextField fld4 = new JTextField("Another field");
                fld1.setBorder(null);
                fld2.setBorder(null);
                fld3.setBorder(null);
                fld4.setBorder(null);
                fld1.setFocusable(false);
                fld2.setFocusable(false);
                fld3.setFocusable(false);
                fld4.setFocusable(false);
                fld1.setColumns(5); // makes no sense
                this.add(fld1);
                this.add(fld2);
                this.add(fld3);
                this.add(fld4);
            }
        }

        this.setLayout(new GridLayout(20,0));
        this.add(new Row());
        this.add(new Row());
        this.add(new Row());

  • Simplest way to automatically alter "const" value in Java during compile time

    - by Michael Mao
    Hi all: This question corresponds to my uni assignment, so I am very sorry to say I cannot adopt any one of the following best practices in a short time -- you know -- the assignment is due tomorrow :( (link to Best way to alter const in Java on StackOverflow)

    Basically the only task (I hope so) left for me is the performance tuning. I've got a bunch of predefined "const" values in my single-class agent source code like this:

        //static final values
        private static final long FT_THRESHOLD = 400;
        private static final long FT_THRESHOLD_MARGIN = 50;
        private static final long FT_SMOOTH_PRICE_IDICATOR = 20;
        private static final long FT_BUY_PRICE_MODIFIER = 0;
        private static final long FT_LAST_ROUNDS_STARTTIME = 90;
        private static final long FT_AMOUNT_ADJUST_MODIFIER = 5;
        private static final long FT_HISTORY_PIRCES_LENGTH = 10;
        private static final long FT_TRACK_DURATION = 5;
        private static final int MAX_BED_BID_NUM_PER_AUC = 12;

    I can definitely alter the values manually and then compile the code to give it another go around. But a thorough "statistic analysis" usually requires over 2000 executions, which typically lasts more than half an hour on my own laptop... So I hope there is a way to alter the values by other means than digging into the source code to change the "const" values there, so I can automatically distribute compiled code to other people's PCs and let them run the statistic analysis instead.

    One other reason for an automatic value adjustment is that I can try using my own agent to defeat itself by choosing different "const" values. Although my values are derived from previous history and statistical results, they are far from the most optimized values.

    I hope there is an easy way I can adopt quickly, so as to have a good sleep tonight while the computer does everything for me... :) Any hints on this sort of stuff? Any suggestion is welcomed and much appreciated.

  • Windows Form user control hosted in WPF - How to capture the leave event?

    - by OKB
    Hi, in my WPF application I'm hosting a custom Windows Forms user control together with other WPF controls. My custom user control is hosted in WPF using a WindowsFormsHost control. This custom user control (the parent, so to speak) contains other custom WinForms controls (child controls). The child controls can be single or composite controls.

    How can I capture the Leave event on a child control when the user navigates from the last child user control in the parent custom user control to a WPF user control?

    According to MSDN (http://msdn.microsoft.com/en-us/library/ms751797.aspx) the Leave event is not supported in the following scenarios: Enter and Leave events are not raised when the following focus changes occur:

    1. From inside to outside a WindowsFormsHost control.
    2. From outside to inside a WindowsFormsHost control.
    3. Outside a WindowsFormsHost control.
    4. From a Windows Forms control hosted in a WindowsFormsHost control to an ElementHost control hosted inside the same WindowsFormsHost.

    Scenarios 1 and 2 are exactly what I struggle with. Do you have any solution to this problem? Some workaround or anything is appreciated :)

    Best Regards, OKB

  • Thread-local memory: using std::string's internal buffer for C-style scratch memory

    - by Hassan Syed
    I am using Protocol Buffers and OpenSSL to generate HMACs and then CBC-encrypt the two fields to obfuscate the session cookies -- similar to Kerberos tokens.

    Protocol Buffers' API communicates with std::strings and has a buffer caching mechanism; I exploit the caching mechanism, for successive calls in the same thread, by placing it in thread local memory; additionally, the OpenSSL HMAC and EVP CTXs are also placed in the same thread local memory structure (see this question for some detail on why I use thread local memory and the massive amount of speedup it enables even with a single thread).

    The generation and deserialization of these cookie strings ("my algorithms") use intermediary void *s and std::strings, and since Protocol Buffers has an internal memory retention mechanism I want these characteristics for "my algorithms". So how do I implement a common scratch memory? I don't know much about the rdbuf (streambuf - stringbuf??) of the std::string object. I would presumably need to grow it to the lowest common size ever encountered during the execution of "my algorithms". Thoughts?

    My question I guess would be: "is the internal buffer of a string re-usable, and if so, how?"

    Edit: See comments to Vlad's answer please.

  • Visual Studio 2008 project file does not load because of an unexpected encoding change.

    - by Xenan
    In our team we have a database project in Visual Studio 2008 which is under source control in Team Foundation Server. Every two weeks or so, after one co-worker checks in, the project file won't load on the other developers' machines. The error message is:

        The project file could not be loaded. Data at the root level is invalid. Line 1, position 1.

    When I look at the project file in Notepad++, the file looks like this:

        ??<NUL?NULxNULmNULlNUL NULvNULeNULrNULsNULiNULoNULnNUL ...

    and so on (you can see <?xml version in this), whereas a normal project file looks like:

        <?xml version="1.0" encoding="utf-16"?>
        ...

    So probably something is wrong with the encoding of the file. This is a problem for us because it turns out to be impossible to get the file encoding correct again. The 'solution' is to throw away the project file and get the last known working version from source control. According to the file, the encoding should be UTF-16. According to Notepad++, the corrupted file is actually UTF-8.

    My questions are:

    - Why is Visual Studio messing up the encoding of the project file, apparently at random times and on random machines?
    - What should we do to prevent this?
    - When it has happened, is there a possibility to restore the current file in the correct encoding instead of pulling an older version from source control?

    As a last note: the problem is with one single project file; all other project files don't expose this problem.

  • Variable Assignment and loops (Java)

    - by Raven Dreamer
    Greetings Stack Overflowers,

    A while back, I was working on a program that hashed values into a hashtable (I don't remember the specifics, and the specifics themselves are irrelevant to the question at hand). Anyway, I had the following code as part of a "recordInput" method:

        tempElement = new hashElement(someInt);
        while(in.hasNext() == true) {
            int firstVal = in.nextInt();
            if (firstVal == -911) {
                break;
            }
            tempElement.setKeyValue(firstVal, 0);
            for(int i = 1; i < numKeyValues; i++) {
                tempElement.setKeyValue(in.nextInt(), i);
            }
            elementArray[placeValue] = tempElement;
            placeValue++;
        } // close while loop
        } // close method

    This part of the code was giving me a very nasty bug -- no matter how I finagled it, no matter what input I gave the program, it would always produce an array full of only a single value -- the last one. The problem, as I later determined, was that because I had not created the tempElement variable within the loop, every slot of elementArray[] ended up holding a reference to the same "tempElement" object -- so when the loop terminated, every slot in the array was filled with the last value tempElement had taken. I was able to fix this bug by moving the declaration of tempElement within the while loop.

    My question to you, Stack Overflow, is whether there is another (read: better) way to avoid this bug while keeping the variable declaration of tempElement outside the while loop. (Suggestions for a better title and tags are also appreciated.)

  • What is the best way to auto-generate INSERT statements for a SQL Server table?

    - by JosephStyons
    We are writing a new application, and while testing, we will need a bunch of dummy data. I've added that data by using MS Access to dump Excel files into the relevant tables.

    Every so often, we want to "refresh" the relevant tables, which means dropping them all, re-creating them, and running a saved MS Access append query. The first part (dropping & re-creating) is an easy SQL script, but the last part makes me cringe. I want a single setup script that has a bunch of INSERTs to regenerate the dummy data.

    I have the data in the tables now. What is the best way to automatically generate a big list of INSERT statements from that dataset? I'm thinking of something like TOAD (for Oracle), where you can right-click on a grid and click Save As > Insert Statements, and it will just dump a big SQL script wherever you want.

    The only way I can think of doing it is to save the table to an Excel sheet and then write an Excel formula to create an INSERT for every row, which is surely not the best way.

    I'm using the 2008 Management Studio to connect to a SQL Server 2005 database.
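
    In case it is useful, one hand-rolled option is to let SQL Server concatenate the statements itself (a sketch only - the Customers table and its columns here are hypothetical stand-ins; the quote-doubling via REPLACE is the part worth keeping). Run it with results-to-text in Management Studio and paste the output into the setup script:

        SELECT 'INSERT INTO Customers (ID, Name, Created) VALUES ('
             + CAST(ID AS varchar(10)) + ', '
             + '''' + REPLACE(Name, '''', '''''') + ''', '
             + '''' + CONVERT(varchar(23), Created, 121) + ''');'
        FROM Customers;

    Nullable columns would each need an ISNULL/CASE wrapper, and dates and floats need an explicit, round-trippable format as above. For anything bigger, the SSMS 2008 "Generate Scripts" wizard can, if memory serves, script the data along with the schema, which avoids writing the concatenation by hand.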

  • Is It Incorrect to Make Domain Objects Aware of The Data Access Layer?

    - by Noah Goodrich
    I am currently working on rewriting an application to use Data Mappers that completely abstract the database from the Domain layer. However, I am now wondering which is the better approach to handling relationships between Domain objects:

    1. Call the necessary find() method of the related data mapper directly from within the domain object, or
    2. Write the relationship logic into the domain object's own (native) data mapper (which is what the examples tend to do in PoEAA) and then call the native data mapper's function within the domain object.

    Either way, it seems to me that in order to preserve the 'Fat Model, Skinny Controller' mantra, the domain objects have to be aware of the data mappers (whether it be only their own, or having access to the other mappers in the system). Additionally, Option 2 seems to unnecessarily complicate the data access layer, as it spreads table access logic across multiple data mappers instead of confining it to a single data mapper.

    So, is it incorrect to make the domain objects aware of the related data mappers and to call data mapper functions directly from the domain objects?

    Update: These are the only two solutions that I can envision to handle the issue of relations between domain objects. Any example showing a better method would be welcome.

  • Active Record two belongs_to calls to the same model

    - by ethyreal
    In linking a sports event to two teams, at first this seemed to make sense:

        events
          - id:integer
          - home_team_id:integer
          - away_team_id:integer

        teams
          - id:integer
          - name:string

    However, I am troubled by how I would link that up in the Active Record model:

        class Event < ActiveRecord::Base
          belongs_to :home_team, :class_name => 'Team', :foreign_key => "home_team_id"
          belongs_to :away_team, :class_name => 'Team', :foreign_key => "away_team_id"
        end

    Is that the best solution? In an answer to a similar question I was pointed to single table inheritance, and then later found polymorphic associations. Neither of which seemed to fit this association.

    Perhaps I am looking at this wrong, but I see no need to subclass a team into home and away teams, since the distinction is only in where the game is played. I thought also about a has_many :through association, but that seems too much, as I will only ever need two teams - and those two teams don't belong to any one event:

        event_teams
          - event_id:integer
          - team_id:integer
          - is_home:boolean

    Is there a cleaner, more semantic way of making these associations in Active Record? Thanks

  • Commit is VERY slow in my NHibernate / SQLite project

    - by Tom Bushell
    I've just started doing some real-world performance testing on my Fluent NHibernate / SQLite project, and am experiencing some serious delays when I Commit to the database. By serious, I mean taking 20 - 30 seconds to Commit 30 K of data!

    This delay seems to get worse as the database grows. When the SQLite DB file is empty, commits happen almost instantly, but when it grows to 10 Meg, I see these huge delays.

    The database has 16 tables, averaging 10 columns each. One possible problem is that I'm storing a dozen or so IList<float> members, but they are typically only 200 elements long. But this is a recent addition to Fluent NHibernate automapping, which stores each float in a single table row, so maybe that's a potential problem.

    Any suggestions on how to track this down? I suspect SQLite is the culprit, but maybe it's NHibernate? I don't have any experience with profilers, but am thinking of getting one. I'm aware of NHibernate Profiler - any recommendations for profilers that work well with SQLite?

    Here's the method that saves the data - it's just a SaveOrUpdate call and a Commit, if you ignore all the error handling and debug logging.

        public static void SaveMeasurement(object measurement)
        {
            Debug.WriteLine("\r\n---SaveMeasurement---");

            // Get the application's database session
            var session = GetSession();

            using (var transaction = session.BeginTransaction())
            {
                try
                {
                    session.SaveOrUpdate(measurement);
                }
                catch (Exception e)
                {
                    throw new ApplicationException(
                        "\r\n SaveMeasurement->SaveOrUpdate failed\r\n\r\n", e);
                }

                try
                {
                    Debug.WriteLine("\r\n---Commit---");
                    transaction.Commit();
                    Debug.WriteLine("\r\n---Commit Complete---");
                }
                catch (Exception e)
                {
                    throw new ApplicationException(
                        "\r\n SaveMeasurement->Commit failed\r\n\r\n", e);
                }
            }
        }
