Search Results

Search found 17686 results on 708 pages for 'high level'.


  • WPF: How do I get a reference to a styled window control in code behind?

    - by Brad
    I have a window defined with a style:

        <Window x:Class="winBorderless" x:Name="winBorderless"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            xmlns:Local="clr-namespace:WindowStyle"
            Style="{StaticResource Window_Cartesia}"
            WindowStartupLocation="CenterScreen"
            BorderThickness="1" BorderBrush="#FF9CAAC1" Margin="5"
            Title="[Document Title]">

    and the style defined in an application-level dictionary:

        <Style x:Key="Window_Cartesia" TargetType="{x:Type Window}">
            <Setter Property="WindowStyle" Value="None"/>
            <Setter Property="AllowsTransparency" Value="True"/>
            <Setter Property="Background" Value="Transparent"/>
            <EventSetter Event="Loaded" Handler="Loaded"/>
            <EventSetter Event="PreviewKeyDown" Handler="Preview_KeyDown"/>
            <EventSetter Event="MouseMove" Handler="FullScreen_MouseMove"/>
            <Setter Property="Template">

    In code-behind I have a reference to the Window instance set:

        Win = DirectCast(sender, winBorderless)

    This allows access to the window properties, as the EventSetters pass references to the various controls. However, it doesn't provide access to the controls defined in the style through the window reference, as they don't exist there. So, what is the best way to reference, from code-behind, a control that is defined in the style? I'd prefer not to iterate the trees to find them, but ya gotta do....
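    One approach worth sketching (a suggestion, not from the post): once the template has been applied, ControlTemplate.FindName can retrieve a named element from it. A minimal sketch in VB, assuming the style's template names its controls; "PART_CloseButton" and OnCloseClick are hypothetical placeholders:

        ' Inside the Loaded handler the style already wires up:
        Private Sub Loaded(ByVal sender As Object, ByVal e As RoutedEventArgs)
            Dim Win As winBorderless = DirectCast(sender, winBorderless)
            Win.ApplyTemplate() ' ensure the template has been instantiated
            ' "PART_CloseButton" is a hypothetical x:Name from the template
            Dim closeButton As Button = _
                TryCast(Win.Template.FindName("PART_CloseButton", Win), Button)
            If closeButton IsNot Nothing Then
                AddHandler closeButton.Click, AddressOf OnCloseClick
            End If
        End Sub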

    Read the article

  • What are best practices for managing related Cabal packages?

    - by Norman Ramsey
    I'm working on a dataflow-based optimization library written in Haskell. It now seems likely that the library is going to have to be split into two pieces:

        - A core piece with minimal build dependencies; call it hoopl-core.
        - A full piece, call it hoopl, which may have extra dependencies on packages like a prettyprinter, QuickCheck, and so on.

    The idea is that the Glasgow Haskell Compiler will depend only on hoopl-core, so that it won't be too difficult to bootstrap the compiler. Other compilers will get the extra goodies in hoopl. Package hoopl will depend on hoopl-core. The Debian package tools can build multiple packages from a single source tree; unfortunately, Cabal has not yet reached that level of sophistication. But there must be other library or application designers out there who have similar issues (e.g., one package for a core library, another for a command-line interface, another for a GUI interface). What are current best practices for building and managing multiple related Haskell packages using Cabal?
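    For what it's worth, a minimal sketch of the split as two .cabal files; the module names and version bounds are illustrative, not taken from the question:

        -- hoopl-core.cabal
        name:           hoopl-core
        version:        0.1
        build-type:     Simple
        cabal-version:  >= 1.6

        library
          exposed-modules: Compiler.Hoopl.Core
          build-depends:   base >= 3 && < 5

        -- hoopl.cabal
        name:           hoopl
        version:        0.1
        build-type:     Simple
        cabal-version:  >= 1.6

        library
          exposed-modules: Compiler.Hoopl
          build-depends:   base, QuickCheck >= 2, hoopl-core == 0.1.*

    Each package then lives in its own directory of the source tree, and cabal install is run once per package, hoopl-core first.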

    Read the article

  • MSBuild "Wrapper" fails while VS2010 "Pure" compile succeeds for MFC application in CruiseControl.NET

    - by ee
    The Overview: I am working on a Continuous Integration build of an MFC application via CruiseControl.NET and VS2010. When building my .sln, a "Visual Studio" CCNet task (devenv) works, but a wrapper MSBuild script run via the CCNet MSBuild task fails with errors like:

        error RC1015: cannot open include file 'winres.h'.
        error C1083: Cannot open include file: 'afxwin.h': No such file or directory
        error C1083: Cannot open include file: 'afx.h': No such file or directory

    The Question: How can I adjust the build environment of my MSBuild wrapper so that the application builds correctly? (Pretty clearly the MFC paths aren't right for the MSBuild environment, but how do I fix it for MSBuild+VS2010+MFC+CCNet?)

    Background Details: We have successfully upgraded an MFC application (.exe with some MFC extension .dlls) to Visual Studio 2010 and can compile the application without issue on developer machines. Now I am working on compiling the application in the CI server environment. I did a full installation of VS2010 (Professional) on the build server; in this way, I knew everything I needed would be on the machine (one way or another) and that this would be consistent with developer machines. VS2010 is correctly installed on the CI server, and the devenv task works as expected. I now have a wrapper MSBuild script that does some extended version processing and then builds the .sln for the application via an MSBuild task. This wrapper script is run via CCNet's MSBuild task and fails with the errors above.

    My Assumptions: This seems to be a missing/wrong configuration of include paths to standard header resources of the MFC persuasion. I should be able to coerce the MSBuild environment to consider the relevant resource files from my VS2010 install and have this approach work. But how do I do that? Am I setting environment variables? Registry settings? I can see how one can inject additional directories in some cases, but this seems to need a more systemic configuration at the compiler-defaults level.
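    One plausible angle (an assumption, not a confirmed fix): CCNet runs as a service, so MSBuild never sees the INCLUDE/LIB/PATH variables that a Visual Studio command prompt provides, while devenv resolves them internally. A wrapper batch file along these lines imports the VC++ environment before building; the install path and solution name are illustrative:

        rem Minimal sketch: load the VC++/MFC environment, then build.
        rem Adjust the VS install path and platform (x86/amd64) to the build server.
        call "C:\Program Files\Microsoft Visual Studio 10.0\VC\vcvarsall.bat" x86
        msbuild MyApp.sln /p:Configuration=Release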

    Read the article

  • How do I add dynamic htmlAttributes to htmlhelper ActionLinks?

    - by camainc
    In my master page I have a top-level menu that is created using ActionLinks:

        <ul id="topNav">
            <li><%=Html.ActionLink("Home", "Index", "Home")%></li>
            <li><%=Html.ActionLink("News", "Index", "News")%></li>
            <li><%=Html.ActionLink("Projects", "Index", "Projects")%></li>
            <li><%=Html.ActionLink("About", "About", "Home")%></li>
            <li><%=Html.ActionLink("Contact", "Contact", "Home")%></li>
            <li><%=Html.ActionLink("Photos", "Photos", "Photos")%></li>
        </ul>

    I want to dynamically add a class named "current" to the link for the page the site is currently on. So, for example, when the site is sitting at the home page, the menu link would render like this:

        <li><a class="current" href="/">Home</a></li>

    Do I have to overload the ActionLink method to do this, or create an entirely new HtmlHelper, or is there a better way? I'm fairly new to MVC, so I'm not sure what is the correct way to go about this. Thanks in advance.
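    A hedged sketch of one way to do it without touching ActionLink itself (assuming MVC 2): a small HtmlHelper extension that inspects the current route values. MenuLink is a made-up name, not a framework method:

        // Sketch: compares the link's target to the current route and adds
        // class="current" on a match.
        public static class MenuExtensions
        {
            public static MvcHtmlString MenuLink(this HtmlHelper html,
                string text, string action, string controller)
            {
                var route = html.ViewContext.RouteData.Values;
                bool isCurrent =
                    string.Equals((string)route["action"], action, StringComparison.OrdinalIgnoreCase) &&
                    string.Equals((string)route["controller"], controller, StringComparison.OrdinalIgnoreCase);
                object attrs = isCurrent ? new { @class = "current" } : null;
                return html.ActionLink(text, action, controller, null, attrs);
            }
        }

    Usage in the master page would then be <%=Html.MenuLink("Home", "Index", "Home")%>, given the extension's namespace is imported.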

    Read the article

  • Converting an input text value to a decimal number

    - by vitto
    Hi, I'm trying to work with decimal data in my PHP and MySQL practice, and I'm not sure how to get an acceptable level of accuracy. I've written a simple function which receives my input text value and converts it to a decimal number ready to be stored in the database:

        <?php
        function unit ($value, $decimal_point = 2) {
            return number_format (str_replace (",", ".", strip_tags (trim ($value))), $decimal_point);
        }
        ?>

    I've handled input like AbdlBsF5%?nl with some jQuery code for replacement and some regex to keep only numbers, dots and commas. In some countries, people use the comma , for decimal numbers, so a number like 72.08 is written as 72,08. I'd like to avoid forcing people to change their usual characters, so I've decided to use jQuery to handle this too. Now, every web developer knows the final validation must be handled by the dynamic page for security reasons. So my question is: should I use something like the unit() function to store data, or should I also check whether users insert invalid chars like letters or something else? If I try this and send letters, the query runs without saving the invalid data. I think this isn't bad, but I could easily be wrong because I'm a rookie. What kind of method should I use if I want a number like 99999.99 for my query?
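    As a rough sketch of the validate-then-normalize idea, extending the posted unit() (the validation rule here is an assumption; tighten the pattern to your needs):

        <?php
        // Sketch: reject anything that isn't a plain decimal number after
        // normalizing the comma, instead of letting the query fail quietly.
        function unit ($value, $decimal_point = 2) {
            $value = str_replace (",", ".", strip_tags (trim ($value)));
            if (!preg_match ('/^\d+(\.\d+)?$/', $value)) {
                return false; // invalid input, e.g. "AbdlBsF5%?nl" -- caller decides
            }
            return number_format ((float)$value, $decimal_point, '.', '');
        }
        ?>

    Note the two extra number_format arguments: without them, 99999.99 would come back as "99,999.99", which is exactly the kind of value that breaks a numeric column.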

    Read the article

  • How does VS 2005 provide history across all TFS Team Projects when tf.exe cannot?

    - by AakashM
    In Visual Studio 2005, in the TFS Source Control Explorer, there is a top-level node for the TFS server itself, with a child node for each Team Project. Right-clicking either the server node or the node for a Team Project gives a context menu with a View History item. Selecting it gives you a History window showing the last 200 or so changesets, either for the specific Team Project chosen, or across all Team Projects. It is this history across all Team Projects that I am wondering about. The command-line tf.exe history command provides (as I understand it) basically the same functionality as the VS TFS Source Control plug-in. But I cannot work out how to get tf.exe history to report across all Team Projects. At a command line, supposing I have C:\ mapped as the root of my workspace, and Foo, Bar, and Baz as Team Projects, I can do

        C:\> tf history Foo /recursive /stopafter:200

    to get the last 200 changesets that affected Team Project Foo; or, from within a Team Project folder,

        C:\Bar> tf history *.* /recursive /stopafter:200

    which does the same thing for Team Project Bar - note that the wildcard *.* is allowed here. However, none of these work (each gives the error message shown):

        C:\> tf history /recursive /stopafter:200
        The history command takes exactly one item

        C:\> tf history *.* /recursive /stopafter:200
        Unable to determine the source control server

        C:\> tf history *.* /server:servername /recursive /stopafter:200
        Unable to determine the workspace

    I don't see an option in the docs for tf for specifying a workspace; it seems to only want to determine it from the current folder. So what is VS 2005 doing? Is it internally doing a history on each Team Project in turn and then sticking the results together? Note also that I have tried the Power Tools; tfpt history from the command line gives exactly the same error messages seen here.
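    One thing that may be worth trying (an educated guess, not verified against the 2005 toolset): give history a server path instead of a local one. The root server path $/ spans every Team Project, and running the command from any mapped folder lets tf.exe resolve the workspace:

        C:\> tf history $/ /recursive /stopafter:200 /noprompt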

    Read the article

  • Why does my Workflow Service (4.0) variable go null in a DoWhile Activity?

    - by jlafay
    I have a WF service in which I'm trying to set up receive activities for "Subscribe" and "Unsubscribe". I'm using this WF Durable Duplex tutorial as a basis, because my service performs callbacks to clients. Basically, think of it as a chat service. I can make client calls to the two receive activities just fine. What happens is the callback address of the client is passed in to Subscribe() on the service. The address is stored as a variable in the WF service and everything looks like it would work as expected. When a client calls Unsubscribe(), the watch I have set on the address var during debugging shows it as null. So what gives? Here's the basic setup of my WF service layout... Everything is enveloped in a DoWhile activity. Inside of that is a Pick activity with two Pick branches. The first branch is for subscribing activities. It has a receive-sendreply activity that assigns the string passed by the client to the WF address var. The second branch handles unsubscribing. The trigger is the Request activity, and the client address is again passed in. From there it goes into a sequence, starting with an If. It checks whether the unsubscribe address equals the address already subscribed. If it does, it sets the address to String.Empty and sends a success message back to the client. Why would a variable that's scoped to the enveloping DoWhile activity be implicitly assigned to null? I'm trying to get this to work so I can implement multiple client subscribers from there, and work on triggers that invoke callbacks to multiple clients. EDIT: I set a breakpoint at the DoWhile level and my var is null once Unsubscribe() is called. When Subscribe() is invoked, the watch shows a value in the var all the way through, until I Unsubscribe() with a client. Should I be using a While activity instead?

    Read the article

  • Should I store generated code in source control

    - by Ron Harlev
    This is a debate I'm taking part in, and I would like to get more opinions and points of view. We have some classes that are generated at build time to handle DB operations (in this specific case with SubSonic, but I don't think that is very important for the question). The generation is set as a pre-build step in Visual Studio, so every time a developer (or the official build process) runs a build, these classes are generated and then compiled into the project. Now, some people claim that having these classes saved in source control could cause confusion if the code you get doesn't match what would have been generated in your own environment. I would like to have a way to trace back the history of the code, even if it is usually treated as a black box. Any arguments or counter-arguments?

    UPDATE: I asked this question since I really believed there is one definitive answer. Looking at all the responses, I could say with a high level of certainty that there is no such answer. The decision should be made based on more than one parameter. Reading the answers below could provide a very good guideline to the types of questions you should be asking yourself when having to decide on this issue. I won't select an accepted answer at this point, for the reasons mentioned above.

    Read the article

  • Linux 2.6.31 Scheduler and Multithreaded Jobs

    - by dsimcha
    I run massively parallel scientific computing jobs on a shared Linux computer with 24 cores. Most of the time my jobs are capable of scaling to 24 cores when nothing else is running on this computer. However, it seems like when even one single-threaded job that isn't mine is running, my 24-thread jobs (which I set for high nice values) only manage to get ~1800% CPU (using Linux notation). Meanwhile, about 500% of the CPU cycles (again, using Linux notation) are idle. Can anyone explain this behavior and what I can do about it to get all of the 23 cores that aren't being used by someone else? Notes: In case it's relevant, I have observed this on slightly different kernel versions, though I can't remember which off the top of my head. The CPU architecture is x64. Is it at all possible that the fact that my 24-core jobs are 32-bit and the other jobs I'm competing w/ are 64-bit is relevant? Edit: One thing I just noticed is that going up to 30 threads seems to alleviate the problem to some degree. It gets me up to ~2100% CPU.

    Read the article

  • https not redirecting to mongrel upstream

    - by kip
    Normal http is working fine for me with nginx and mongrel; however, when I attempt to use https I am directed to the "welcome to nginx" page.

        http {
            # passenger_root /opt/passenger-2.2.11;
            # passenger_ruby /usr/bin/ruby1.8;
            include mime.types;
            default_type application/octet-stream;

            #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
            #                '$status $body_bytes_sent "$http_referer" '
            #                '"$http_user_agent" "$http_x_forwarded_for"';
            #access_log logs/access.log main;

            sendfile on;
            #tcp_nopush on;

            #keepalive_timeout 0;
            keepalive_timeout 65;

            upstream mongrel {
                server 00.000.000.000:8000;
                server 00.000.000.000:8001;
            }

            server {
                listen 443;
                server_name domain.com;

                ssl on;
                ssl_certificate /etc/ssl/localcerts/domain_combined.crt;
                ssl_certificate_key /etc/ssl/localcerts/www.domain.com.key;
                # ssl_session_timeout 5m;
                # ssl_protocols SSLv2 SSLv3 TLSv1;
                # ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;
                # ssl_prefer_server_ciphers on;

                location / {
                    root /current/public/;
                    index index.html index.htm;
                    proxy_set_header X_FORWARDED_PROTO https;
                    proxy_set_header X-Real-IP $remote_addr;
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                    proxy_set_header Host $http_host;
                    proxy_redirect off;
                    proxy_pass http://mongrel;
                }
            }
        }

    Read the article

  • CIL and JVM: little endian to big endian in C# and Java

    - by Haythem
    Hello, I am using C# on the client, where I am converting double values to a byte array. I am using Java on the server, with writeDouble and readDouble to convert double values to and from byte arrays. The problem is that the double values arriving on the Java side are not the double values originally given to C#. writeDouble in Java converts the double argument to a long using the doubleToLongBits method, and then writes that long value to the underlying output stream as an 8-byte quantity, high byte first. doubleToLongBits returns a representation of the specified floating-point value according to the IEEE 754 floating-point "double format" bit layout. The program on the server is waiting for 64-102-112-0-0-0-0-0 from C# to convert it to 1700.0, but it is receiving 0000014415464 from C# after C# converts 1700.0. This is my code in C#:

        class User
        {
            double workingStatus;

            public void persist()
            {
                byte[] dataByte;
                using (MemoryStream ms = new MemoryStream())
                {
                    using (BinaryWriter bw = new BinaryWriter(ms))
                    {
                        bw.Write(workingStatus);
                        bw.Flush();
                        bw.Close();
                    }
                    dataByte = ms.ToArray();
                    for (int j = 0; j < dataByte.Length; j++)
                    {
                        Console.Write(dataByte[j]);
                    }
                }
            }

            public double WorkingStatus
            {
                get { return workingStatus; }
                set { workingStatus = value; }
            }
        }

        class Test
        {
            static void Main()
            {
                User user = new User();
                user.WorkingStatus = 1700.0;
                user.persist();
            }
        }

    Thank you for the help.
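    That output is consistent with BinaryWriter emitting the IEEE 754 bytes in little-endian order, the reverse of what DataInputStream.readDouble expects. A small sketch of the usual fix on the C# side (the helper name is illustrative):

        // Sketch: emit the double high byte first (network order) so Java's
        // readDouble reconstructs the same value.
        static byte[] ToBigEndian(double value)
        {
            byte[] bytes = BitConverter.GetBytes(value); // little-endian on x86
            if (BitConverter.IsLittleEndian)
                Array.Reverse(bytes);                    // now high byte first
            return bytes;
        }

        // e.g. inside persist():  bw.Write(ToBigEndian(workingStatus));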

    Read the article

  • Getting SSL to work with Apache/Passenger on OSX

    - by jonnii
    I use apache/passenger on my development machine, but need to add SSL support (something which isn't exposed through the control panel). I've done this before in production, but for some reason I can't seem to get it to work on OSX. The steps I've followed so far, starting from a default apache OSX install, are:

        1. Install passenger and the passenger preference pane.
        2. Add my rails app (this works).
        3. Create my ca.key, server.crt and server.key as detailed on the apple website.

    At this point I need to start editing the apache configs, so I added:

        # Apache knows to listen on port 443 for ssl requests.
        Listen 443
        Listen 80

    I thought I'd try editing the config generated by the passenger preference pane first, to get everything working. It starts off looking like this:

        <VirtualHost *:80>
            ServerName myapp.local
            DocumentRoot "/Users/jonnii/programming/ruby/myapp/public"
            RailsEnv development
            <Directory "/Users/jonnii/programming/ruby/myapp/public">
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    I then append this:

        <VirtualHost *:443>
            ServerName myapp.local
            DocumentRoot "/Users/jonnii/programming/ruby/myapp/public"
            RailsEnv development
            <Directory "/Users/jonnii/programming/ruby/myapp/public">
                Order allow,deny
                Allow from all
            </Directory>

            # SSL Configuration
            SSLEngine on
            SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP
            SSLOptions +FakeBasicAuth +ExportCertData +StdEnvVars +StrictRequire

            # Self signed certificates
            SSLCertificateFile /private/etc/apache2/ssl.key/server.crt
            SSLCertificateKeyFile /private/etc/apache2/ssl.key/server.key
            SSLCertificateChainFile /private/etc/apache2/ssl.key/ca.crt

            SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown downgrade-1.0 force-response-1.0
        </VirtualHost>

    The files referenced all exist (I double-checked that), but now when I restart apache I can't even get to myapp.local. However, apache can still serve the default page when I click on it in the sharing preference panel. Any help would be greatly appreciated.

    Read the article

  • Configure IIS7 to serve static content through the ASP.NET Runtime

    - by Anton Gogolev
    I searched high and low and still cannot find a definite answer. How do I configure IIS 7.0, or a Web Application in IIS, so that the ASP.NET Runtime will handle all requests -- including ones to static files like *.js, *.gif, etc.? What I'm trying to do is as follows. We have a kind of SaaSy site, which we can "skin" for every customer. "Skinning" means developing a custom master page and using a bunch of *.css and other images. Quite naturally, I'm using a VirtualPathProvider, which operates like this:

        public override System.Web.Hosting.VirtualFile GetFile(string virtualPath)
        {
            if (PhysicalFileExists(virtualPath))
            {
                var virtualFile = base.GetFile(virtualPath);
                return virtualFile;
            }
            if (VirtualFileExists(virtualPath))
            {
                var brandedVirtualPath = GetBrandedVirtualPath(virtualPath);
                var absolutePath = HttpContext.Current.Server.MapPath(brandedVirtualPath);
                Trace.WriteLine(string.Format("Serving '{0}' from '{1}'",
                    brandedVirtualPath, absolutePath), "BrandingAwareVirtualPathProvider");
                var virtualFile = new VirtualFile(brandedVirtualPath, absolutePath);
                return virtualFile;
            }
            return null;
        }

    The basic idea is as follows: we have a branding folder inside our webapp, which in turn contains folders for each "brand", with "brand" being equal to host name. That is, requests to http://foo.example.com/ should use static files from branding/foo_example_com, whereas http://bar.example.com/ should use content from branding/bar_example_com. Now what I want IIS to do is forward all requests for static files to the StaticFileHandler, which would then use this whole "infrastructure" and serve the correct files. However, try as I might, I cannot configure IIS to do this.
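    In IIS7's integrated pipeline, the usual switch for this kind of setup (hedged: it routes everything through managed code, with a throughput cost) is:

        <!-- web.config sketch: push all requests, static files included,
             through the managed pipeline so the VirtualPathProvider runs. -->
        <system.webServer>
          <modules runAllManagedModulesForAllRequests="true" />
        </system.webServer>

    The application pool must be running in Integrated mode for this setting to take effect.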

    Read the article

  • How does real-time collaboration with multiple clients work in a system using operation transformation?

    - by Saikat Chakrabarti
    I just finished reading High-Latency, Low-Bandwidth Windowing in the Jupiter Collaboration System and I mostly followed everything until part 6: global consistency. This part describes how the system described in the paper can be extended to accommodate multiple clients connected to the server. However, the explanation is very short and essentially says the system will work if the central server merely forwards client messages to all the other clients. I don't really understand how this works, though. What state vector would be sent in the message that is sent to all the other clients? Does the server maintain separate state vectors for each client? Does it maintain a separate copy of the widgets locally for each client? The simple example I can think of is this setup: imagine client A, a server, and client B, with client A and client B both connected to the server. To start, all three have the state object "ABCD". Then client A sends the message "insert character F at position 0" at the same time client B sends the message "insert character G at position 0" to the server. It seems like simply relaying client A's message to client B and vice versa doesn't actually handle this case. So what exactly does the server do?
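    My reading of the paper, as a conceptual sketch rather than its actual code: the server keeps a separate two-party Jupiter state, including a state vector, per client, so each connection runs the already-described 2-way protocol independently. In Python-flavored pseudocode, where TwoPartyState, incoming and send stand in for the 2-party algorithm from earlier in the paper:

        # Conceptual sketch: one 2-party sync state per connected client.
        class Server:
            def __init__(self, widgets):
                self.widgets = widgets  # the server's own copy of the state
                self.links = {}         # client_id -> TwoPartyState

            def receive(self, sender_id, msg):
                # transform against this link's pending ops, as in the 2-party case
                op = self.links[sender_id].incoming(msg)
                op.apply(self.widgets)
                # relay: each other link tags the op with its own state vector
                for client_id, link in self.links.items():
                    if client_id != sender_id:
                        link.send(op)

    On the A/B example: the server would transform B's "insert G at 0" against A's already-applied "insert F at 0" before relaying it, which is what makes the plain "forwarding" description work.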

    Read the article

  • Seeking reporting or templating tool to generate large formatted PDF reports from dataset

    - by Mr. Tacos
    Say I have some data in MySQL or a big ole CSV file. I also have a report. It's a PDF, call it 100 pages long. I need to generate variations on this PDF for slices of the data. More specific example: I have a CSV file with each StackOverflow user in a row, and each column contains various statistics about that user. I have a report called "Your StackOverflow Performance". It's got lots of text, always the same, but each section contains something like: "You Vs. The Average StackOverflow Poster on this metric". I want a table that appears there that has the average data, which is the same in every run of the PDF, in one column. In the second column, I want your data, which is different for each PDF/row in the CSV file/user of StackOverflow. I'm pretty sure people use things like Crystal for this? Is there something in MS SQL Server that's good for this? An open source template language? I'm not even really sure if what I need is called a 'reporting' tool (since I don't really need to do any crunching; the data in this case is being crunched by a series of scripts and SPSS, so I don't need bands and subbands and so on) or 'templating'. Is there even such a thing as templating PDFs? Natch, I'd be fine with something that generates output easily scriptable to PDF, like eps, but not something like HTML. The report formatting is fussy and done and externally determined and handed down from on high. It's print-oriented, not webby. Thanks in advance.

    Read the article

  • Determining failing sectors on portable flash memory

    - by Faxwell Mingleton
    I'm trying to write a program that will detect signs of failure for portable flash memory devices (thumb drives, etc.). I have seen tools in the past that are able to detect failing sectors and other kinds of trouble on conventional mechanical hard drives, but I fear that flash memory does not have the same kind of predictable low-level access to the hardware, due to the internal workings of the storage. Things like wear-leveling and other block-remapping techniques (to skip over 'dead' sectors?) lead me to believe that determining if a flash drive is failing will be difficult at best, if not impossible (short of having constant read failures and device unmounts). Flash drives at their end-of-life should be easy to detect (constant CRC discrepancies during reads and all-out failure). But what about drives that might be failing early? Are there any tell-tale signs, like slower throughput speeds, that might indicate a flash drive is going to fail much sooner than normal? Along the lines of detecting potentially bad blocks, I had considered attempting random reads/writes to a file close to or exactly the size of the entire volume, but even then is it possible that the drive might report sizes under its maximum capacity to account for 'dead' blocks? In short, is there any way to circumvent or at least detect (algorithmically or otherwise) the use of block-remapping or other life-extension techniques for flash memory? Let me end this question by expressing my uncertainty as to whether or not this belongs on serverfault.com. This is definitely a hardware-related question, but I also desire a software solution - preferably one that I can program myself. If this question is misplaced, I will be happy to migrate it to serverfault - but I do need a programming solution. Please let me know if you need clarification :) Thanks!

    Read the article

  • Cooperative/Non-preemptive threading avoiding deadlocks?

    - by Wayne
    Any creative ideas for avoiding deadlocks on a yield or sleep with cooperative/non-preemptive multitasking, without resorting to an O/S Thread.Sleep(10)? Typically the yield or sleep call will call back into the scheduler to run other tasks. But this can sometimes produce deadlocks. Some background: this application has an enormous need for speed and, so far, it's extremely fast compared to other systems in the same industry. One of the speed techniques is cooperative/non-preemptive threading, rather than paying the cost of a context switch from O/S threads. The high-level design is a priority manager which calls out to tasks depending on priority and processing time. Each task does one "iteration" of work and returns to wait its turn again in the priority queue. The tricky thing with non-preemptive threading is what to do when you want a particular task to stop in the middle of work and wait for some other event from a different task before continuing. In this case, we have 3 tasks, A, B and C, where A is a controller that must synchronize the activity of B and C. First, A starts both B and C. Then B yields, so C gets invoked. When C yields, A sees they are both inactive and decides it's time for B to run but not time for C yet. Well, B is now stuck in a yield that has called C, so it can never run. Sincerely, Wayne
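    One classic shape for this (a sketch, not Wayne's framework) is a trampoline: tasks are C# iterators, and every yield returns to the scheduler's loop rather than calling a peer task, so B can never end up stuck "inside" C:

        using System.Collections.Generic;

        class Scheduler
        {
            private readonly Queue<IEnumerator<object>> ready =
                new Queue<IEnumerator<object>>();

            public void Add(IEnumerable<object> task)
            {
                ready.Enqueue(task.GetEnumerator());
            }

            public void Run()
            {
                while (ready.Count > 0)
                {
                    IEnumerator<object> task = ready.Dequeue();
                    if (task.MoveNext())      // run one slice up to its yield
                        ready.Enqueue(task);  // control returns here, never to a peer
                }
            }
        }

    Task A's "wait for B" then becomes yielding until a flag set by B is true, checked at the top of A's loop, instead of blocking inside a nested call.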

    Read the article

  • Organizing code, logical layout of segmented files

    - by David H
    I have known enough about programming to get me in trouble for about 10 years now. I have no formal education, though I've read many books on the subject for various languages. The language I am primarily focused on now would be PHP, at least for the scale of things I am doing now. I have used some OOP classes for a while, but never took the dive into understanding the principles behind the scenes. I am still not at the level I would like to be expression-wise, but my recent reading of a book titled The OOP Thought Process has me wanting to advance my programming skills. With motivation from the new concepts, I have started a new project in which I've coded some re-usable classes that deal with user auth, user profiles, database interfacing, and some other stuff I use regularly on most projects. Now, having split my typical garbled spaghetti-bowl mess of code into somewhat organized files, I've run into problems when it comes to making sure files are all included when they need to be, how to logically divide the scripts up into classes, and how segmented I should be making each class. I guess I have rambled on enough about much of nothing, but what I am really asking for is advice from people, or suggested reading that focuses not on specific functions and formats of code, but on the logical layout of projects that are larger than just a hobby project. I want to learn how to do things properly, and while I am still learning in some areas, this is something that I have no clue about other than just being creative, and trial/error. Mostly error. Thanks for any replies. This place is great.
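    For the "making sure files are all included" part specifically, a small sketch (assuming PHP 5.3+, a one-class-per-file layout, and an illustrative classes/ directory):

        <?php
        // Sketch: register an autoloader so classes load on first use,
        // instead of hand-maintaining include/require lists per script.
        spl_autoload_register(function ($class) {
            $file = __DIR__ . '/classes/' . $class . '.php';
            if (is_file($file)) {
                require $file;
            }
        });

        $auth = new UserAuth(); // hypothetical class; pulls in classes/UserAuth.php
        ?>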

    Read the article

  • Invalid iPhone Application Binary

    - by Kristopher Johnson
    I'm trying to upload an application to the iPhone App Store, but I get this error message from iTunes Connect:

        The binary you uploaded was invalid. The signature was invalid, or it was not signed with an Apple submission certificate.

    My guess is that it is not properly signed. I have downloaded my App Store distribution certificate, but I can't figure out how to "sign" my application with it. The SDK's documentation about code signing is not very helpful. (FWIW, I can install the app on my iPhone just fine using the development provisioning profile.) However, it is possible that I screwed things up on a more basic level. Here's what I did to try to prepare it for upload:

        1. In Xcode, select the Device|Release target.
        2. Select the target and click the Info button.
        3. Change "Code Signing Identity" to "iPhone Distribution", and change "Code Signing Provisioning Profile" to my App Store distribution profile.
        4. Build.
        5. Go to the directory where the built MyApp.app bundle is, control-click and choose "Compress" to create MyApp.zip.
        6. Upload MyApp.zip to the App Store via iTunes Connect (which resulted in the above error message).

    Can anybody give me any hints?

    Edit: Found someone with the same problem. Unfortunately, he won't tell us how he fixed it.

        http://www.rhonabwy.com/wp/2008/07/18/seattlebus-diary-ongoing-update-saga/#comments
        http://www.rhonabwy.com/wp/2008/07/22/seattlebus-diary-update-is-pending-review/

    (Note: For general information on submitting iPhone applications to the App Store, see Steps to upload an iPhone application to the AppStore.)

    Read the article

  • Change Data Capture or Change Tracking - Same as Traditional Audit Trail Table?

    - by HardCode
    Before I delve into the abyss of Microsoft documentation any deeper, I'd like to know if someone experienced with Change Data Capture and Change Tracking knows whether one or both of these can be used to replace the traditional audit trail setup: a copy of the 'real table' (all of the fields of the original table, plus date/time, user ID, and DML action field) populated by triggers, where the trigger does all of the manual work. The MSDN overview documentation explains at a high level what Change Data Capture and Change Tracking are, but it isn't clear enough to me, and doesn't state outright, that these tools can be used to replace the traditional audit trail tables we've made so often. Can someone with experience using Change Data Capture and Change Tracking save me a lot of time, or confirm that I am spending time looking at the right tool? The critical part of our audit trail is capturing all changes to a table's fields (on INSERT, UPDATE, DELETE), when they happened, and who made them. These changes are commonly provided to an end user chronologically via an audit trail report. Which raises another question: if Change Data Capture or Change Tracking is the solution, I'd assume that this data can be queried just like data from a normal table? EDIT: I need a permanent audit trail, regardless of time. I see that Change Data Capture has to do with the transaction logs, so this sounds finite to me.
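    From what I know of the feature (worth verifying against your SQL Server 2008 edition; schema and table names below are illustrative): CDC records what changed and when, but not which user made the change, so the "who did it" column still needs your own mechanism. Enabling and querying it looks roughly like this:

        -- Sketch: enable CDC for the database and one table.
        EXEC sys.sp_cdc_enable_db;
        EXEC sys.sp_cdc_enable_table
             @source_schema = N'dbo',
             @source_name   = N'Customer',
             @role_name     = NULL;

        -- And yes, changes are queryable much like a normal table:
        DECLARE @from binary(10), @to binary(10);
        SET @from = sys.fn_cdc_get_min_lsn('dbo_Customer');
        SET @to   = sys.fn_cdc_get_max_lsn();
        SELECT * FROM cdc.fn_cdc_get_all_changes_dbo_Customer(@from, @to, N'all');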

    Read the article

  • Can't create an OgreBullet Trimesh

    - by Nathan Baggs
    I'm using Ogre and Bullet for a project and I currently have a first-person camera set up with a Capsule collision shape. I've created a model of a cave (which will serve as the main part of the level) and imported it into my game. I'm now trying to create an OgreBulletCollisions::TriangleMeshCollisionShape of the cave. The code I've got so far is this, but it isn't working: it compiles, but the Capsule shape passes straight through the cave shape. Also, I have debug outlines on, and there are none being drawn around the cave mesh.

        Entity *cave = mSceneMgr->createEntity("Cave", "pCube1.mesh");
        SceneNode *caveNode = mSceneMgr->getRootSceneNode()->createChildSceneNode();
        caveNode->setPosition(0, 10, 250);
        caveNode->setScale(10, 10, 10);
        caveNode->rotate(Quaternion(0.5, 0.5, -0.5, 0.5));
        caveNode->attachObject(cave);

        OgreBulletCollisions::StaticMeshToShapeConverter *smtsc =
            new OgreBulletCollisions::StaticMeshToShapeConverter();
        smtsc->addEntity(cave);
        OgreBulletCollisions::TriangleMeshCollisionShape *tri = smtsc->createTrimesh();

        OgreBulletDynamics::RigidBody *caveBody =
            new OgreBulletDynamics::RigidBody("cave", mWorld);
        caveBody->setStaticShape(tri, 0.1, 0.8);

        mShapes.push_back(tri);
        mBodies.push_back(caveBody);

    Any suggestions are welcome.

    Read the article

  • jquery Tab group IDs

    - by mare
    I'm having an issue with the jQuery UI Tabs script: it does not pick up tabs that have a dot "." in their name (ID). For instance, like this:

        <script type="text/javascript">
            $(function () {
                $("#tabgroup\\.services").tabs();
            });
        </script>
        <div id="tabgroup.Services">
            <ul>
                <li><a href="#tab.service1">Service 1 title</a></li>
                <li><a href="#tab.service2">Service 2 title</a></li>
            </ul>
            <div id="tab.service1">
                <p>content</p>
            </div>
            <div id="tab.service2">
                <p>content</p>
            </div>
        </div>

    The problem is that to select an element with a dot in its name, you need to use escapes (as I do when I initialize the tabs on my tab group), and apparently the Tabs implementation does not do that. Although I can escape at the tab-group level, I cannot do it lower down, because that's implemented in the Tabs JS file and I would not want to modify it (if possible).
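    A hedged workaround, given that the plugin builds its own selectors from the hrefs and doesn't escape them: use a separator that never needs escaping. A sketch of the same markup with hyphens:

        <script type="text/javascript">
            $(function () {
                $("#tabgroup-services").tabs();
            });
        </script>
        <div id="tabgroup-services">
            <ul>
                <li><a href="#tab-service1">Service 1 title</a></li>
                <li><a href="#tab-service2">Service 2 title</a></li>
            </ul>
            <div id="tab-service1"><p>content</p></div>
            <div id="tab-service2"><p>content</p></div>
        </div>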

    Read the article

  • Application connection with database persists even after successful transaction

    - by anupam3m
    Hi, I am using Spring.Data.NHibernate12 on my database level; my application's connection with the database is not getting released. Given underneath is DataConfiguration.xml:

        <?xml version="1.0" encoding="utf-8" ?>
        <objects xmlns="http://www.springframework.net"
                 xmlns:db="http://www.springframework.net/database">

            <object id="AuditLogger"
                    type="Risco.Rsp.Ac.Audit.AuditLogger, Risco.Rsp.Ac.Audit"
                    singleton="false">
                <property name="CacheSettings" ref="CacheSettings"/>
            </object>

            <object id="CacheSettings"
                    type="Risco.Rsp.Ac.AMAC.CacheMgmt.Utilities.UpdateEntityCacheHelper, Risco.Rsp.Ac.AMAC.CacheMgmt.Utilities"
                    singleton="false"/>

            <object type="Spring.Objects.Factory.Config.PropertyPlaceholderConfigurer, Spring.Core">
                <property name="ConfigSections" value="databaseSettings"/>
            </object>

            <db:provider id="AMACDbProvider" provider="OracleClient-2.0"
                         connectionString="Data Source=RISCODEVDB;User ID=amsbvt; Password=amsuser1234;"/>
        </objects>

        Risco.Rsp.Ac.AMAC.Mapping
        Risco.Rsp.Ac.Logging.Appenders
        Risco.Rsp.Ac.AMAC.CacheMappings

    Read the article

  • Understanding OOP Principles in passing around objects/values

    - by Hans
    I'm not quite grokking a couple of things in OOP, and I'm going to use a fictional understanding of SO to see if I can get help understanding them. So, on this page we have a question. You can comment on the question. There are also answers. You can comment on the answers.

        Question
            - comment
            - comment
            - comment
        Answer
            - comment
        Answer
            - comment
            - comment
            - comment
        Answer
            - comment
            - comment

    So, I'm imagining a very high-level understanding of this type of system (in PHP, not .Net, as I am not yet familiar with .Net) would be like:

        $question = new Question;
        $question->load($this_question_id); // from the URL probably
        echo $question->getTitle();

    To load the answers, I imagine it's something like this ("A"):

        $answers = new Answers;
        $answers->loadFromQuestion($question->getID());
        // or $answers->loadFromQuestion($this_question_id);
        while ($answer = $answers->getAnswer()) {
            echo $answer->showFormatted();
        }

    Or, would you do ("B"):

        $answers->setQuestion($question); // inject the whole obj, so we have access
                                          // to all the data and public methods in $question
        $answers->loadFromQuestion();     // the ID would be found via $this->question->getID()
                                          // instead of from the argument passed in
        while ($answer = $answers->getAnswer()) {
            echo $answer->showFormatted();
        }

    I guess my problem is, I don't know when or if I should be passing in an entire object, and when I should just be passing in a value. Passing in the entire object gives me a lot of flexibility, but it's more memory and subject to change, I'd guess (like a property or method rename). If "A" style is better, why not just use a function? OOP seems pointless here. Thanks, Hans
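    For what it's worth, a sketch of style "B" taken one step further with constructor injection; loadFromQuestionId is a made-up internal helper:

        <?php
        // Sketch: Answers receives its Question up front, so the dependency
        // is explicit and every public method of $question stays reachable.
        class Answers {
            private $question;

            public function __construct(Question $question) {
                $this->question = $question;
            }

            public function load() {
                return $this->loadFromQuestionId($this->question->getID());
            }
        }

        $answers = new Answers($question);
        $answers->load();
        ?>

    On the memory worry: PHP 5 passes object handles, not copies, so injecting the whole object costs next to nothing; the real trade-off is coupling Answers to Question's interface.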

    Read the article

  • How to properly combine two files in XAML in Microsoft Blend?

    - by MartyIX
    Hello, I have a test project with the file MainWindow.xaml with the content:

        <Window x:Class="MainWindow"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            xmlns:ad="clr-namespace:AvalonDock;assembly=AvalonDock"
            xmlns:diag="clr-namespace:System.Diagnostics;assembly=WindowsBase"
            xmlns:view="clr-namespace:Sokoban.View;assembly=Solvers"
            Title="Window1" Height="300" Width="300" Loaded="Window_Loaded">
            <ad:DockingManager x:Name="dockingManager">
                <ad:ResizingPanel Orientation="Vertical">
                    <view:Solvers x:Name="solvers" diag:PresentationTraceSources.TraceLevel="High" />
                    <!-- LINE BELOW DEMONSTRATES WORKING CODE INSTEAD OF LINE ABOVE -->
                    <!--<ad:DocumentPane Name="GamesDocumentPane" HorizontalAlignment="Stretch" VerticalAlignment="Stretch">
                        <ad:DockableContent x:name="classesContent" Title="Classes">
                            <TextBlock>test</TextBlock>
                        </ad:DockableContent>
                    </ad:DocumentPane>-->
                </ad:ResizingPanel>
            </ad:DockingManager>
        </Window>

    and in another project I have the file Solvers.xaml:

        <ad:DocumentPane x:Class="Sokoban.View.Solvers"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            xmlns:ad="clr-namespace:AvalonDock;assembly=AvalonDock"
            xmlns:diag="clr-namespace:System.Diagnostics;assembly=WindowsBase"
            Name="GamesDocumentPane" HorizontalAlignment="Stretch" VerticalAlignment="Stretch">
        </ad:DocumentPane>

    When I open my Visual Studio solution in Microsoft Blend 4, I see the error:

        InvalidOperationException: DocumentPane must be put under a DockingManager!

    when I open either MainWindow.xaml or Solvers.xaml. It is all right in Solvers.xaml, because there really is no DockingManager there, but MainWindow.xaml should work, shouldn't it? How do I solve the problem?

    Note: It seems to me that the files are processed separately, and because the file Solvers.xaml contains the error, the MainWindow.xaml file also contains the very same error.

    Note 2: The XAML files use the AvalonDock library. Is there a way to say that Solvers.xaml is only an extension of another file? Thank you for any help!

    Read the article
