Search Results

Search found 10852 results on 435 pages for 'tool'.


  • Delphi TBytesField - How to see the text properly - Source is HIT OLEDB AS400

    - by myitanalyst
    We are connecting to a multi-member AS400 iSeries table via HIT OLEDB and HIT ODBC. You connect to this table via an alias to access a specific member. We create the alias on the AS400 this way:

        CREATE ALIAS aliasname FOR table(membername)

    We can then query each member of the table this way:

        SELECT * FROM aliasname

    We are testing this in Delphi 6 first, but will move it to D2010 later. We are using HIT OLEDB for the AS400. We are pulling down records from a table and the field is being seen as a TBytesField. I have also tried the ODBC driver and it sees it as a TBytesField as well. Directly on the AS400 I can query the data and see readable text, and I can use the iSeries Navigator tool and see readable text as well. However, when I bring the data down to the Delphi client via HIT OLEDB or HIT ODBC and try to view it via asString, I just see unreadable text, something like this:

        ñðð@ðõñððððñ÷@õôððõñòøóóöøñðÂÁÕÒ@ÖÆ@ÁÔÅÙÉÃÁ@@@@@@@@ÂÈÙÉâãæÁðòñè@ÔK@k@ÉÕÃK@@@@@@@@@ç

    I jumbled up the text above, but those are the character types that show up. When I did a test in D2010 the text looked like Japanese or Chinese characters, but if I display it as an AnsiString then it looks like it does in Delphi 6. I am thinking this may have something to do with code pages or character sets, but I have no experience in this area, so it is new to me. When I look at the Coded Character Set on the AS400 it is set to 65535. What do I need to do to make this text readable?

    We do have a third-party component (Delphi400) that makes things behave in a more native AS400 manner. When I use its AS400 connection and AS400 query components it shows the field as a TStringField and displays it just fine. BUT we are phasing out this product (for a number of reasons) and would really like the OLEDB with the ADO components to work. Just for clarification, the HIT OLEDB with TADOQuery does show some fields as TStringFields for many of the other tables we use... not sure why it is showing as a TBytesField in this case. I am not an AS400 expert, but looking at the field definitions on the AS400, the ones showing up as TBytesFields look the same as the ones showing up as TStringFields... but there must be a difference. Maybe due to being a multi-member table? So... does anyone have any guidance on how to get correct, readable string data? If you need more info please ask. Greg
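
    The sample above is consistent with raw EBCDIC bytes being rendered as ANSI: CCSID 65535 conventionally means "no conversion", so drivers that honor it hand back untranslated bytes. A minimal sketch of client-side translation, written in C# for brevity and assuming the data is EBCDIC code page 37 (the table's real CCSID may differ):

        using System;
        using System.Text;

        class EbcdicDemo
        {
            static void Main()
            {
                // Hypothetical raw bytes as a TBytesField might deliver them;
                // these five happen to spell "HELLO" in CCSID 37.
                byte[] raw = { 0xC8, 0xC5, 0xD3, 0xD3, 0xD6 };

                // Code page 37 is EBCDIC US/Canada; substitute the actual CCSID.
                Encoding ebcdic = Encoding.GetEncoding(37);
                Console.WriteLine(ebcdic.GetString(raw)); // prints HELLO
            }
        }

    The commonly suggested server-side alternative is to recreate the table or alias with a real CCSID instead of 65535, so the driver performs the translation itself.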

    Read the article

  • What are the reasons why the CPU usage doesn’t go 100% with C# and APM?

    - by Martin
    I have an application which is CPU intensive. When the data is processed on a single thread, the CPU usage goes to 100% for many minutes, so the performance of the application appears to be bound by the CPU. I have multithreaded the logic of the application, which resulted in an increase in overall performance. However, the CPU usage hardly goes above 30%-50%. I would expect the CPU (and the many cores) to go to 100%, since I process many sets of data at the same time. Below is a simplified example of the logic I use to start the threads. When I run this example, the CPU goes to 100% (on an 8/16 core machine). However, my application, which uses the same pattern, doesn't.

        public class DataExecutionContext
        {
            public int Counter { get; set; }
            // Arrays of data
        }

        static void Main(string[] args)
        {
            // Load data from the database into the context
            var contexts = new List<DataExecutionContext>(100);
            for (int i = 0; i < 100; i++)
            {
                contexts.Add(new DataExecutionContext());
            }

            // Data loaded. Start to process.
            var latch = new CountdownEvent(contexts.Count);
            var processData = new Action<DataExecutionContext>(c =>
            {
                // The thread doesn't access data from a DB, file,
                // network, etc. It reads and writes data in RAM only
                // (in its context).
                for (int i = 0; i < 100000000; i++)
                    c.Counter++;
            });

            foreach (var context in contexts)
            {
                processData.BeginInvoke(context,
                    new AsyncCallback(ar => { latch.Signal(); }), null);
            }

            latch.Wait();
        }

    I have reduced the number of locks to the strict minimum (only the latch is locking). The best way I found was to create a context in which a thread can read/write in memory. Contexts are not shared among other threads. The threads can't access the database, files or the network. In other words, I profiled my application and I didn't find any bottleneck. Why doesn't the CPU usage of my application go above 50%? Is it the pattern I use? Should I create my own threads instead of using the .NET thread pool? Are there any gotchas? Is there any tool you could recommend to find my issue? Thanks!
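
    One thing the simplified example doesn't capture: Delegate.BeginInvoke queues work to the CLR thread pool, and the pool grows beyond its minimum thread count only gradually, so a burst of work items can run well below full parallelism for a while. A hedged diagnostic sketch (the numbers are illustrative, not a recommendation):

        using System;
        using System.Threading;

        class PoolDiagnostic
        {
            static void Main()
            {
                int worker, io;
                ThreadPool.GetMinThreads(out worker, out io);
                Console.WriteLine("min worker threads: " + worker);

                // Hypothesis test 1: pre-warm the pool so ramp-up isn't the bottleneck.
                ThreadPool.SetMinThreads(Environment.ProcessorCount * 2, io);

                // Hypothesis test 2: bypass the pool entirely with dedicated threads.
                var threads = new Thread[Environment.ProcessorCount];
                for (int i = 0; i < threads.Length; i++)
                {
                    threads[i] = new Thread(() => { /* CPU-bound work here */ });
                    threads[i].Start();
                }
                foreach (var t in threads) t.Join();
            }
        }

    If dedicated threads still stall at 50%, the cause is more likely in the real workload (memory bandwidth, false sharing, hidden locks) than in the dispatch pattern.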

    Read the article

  • Installing Fabric On Windows (Error No Module Called Readline)

    - by Jon
    I'm trying to use the Fabric 0.1.1 deploy tool (http://docs.fabfile.org/) on Windows and we're running into an issue with the readline module. I've been through various threads but can't seem to solve the issue. It's important because we can't deploy applications from Windows-based machines.

        C:\Documents and Settings\dev\Desktop\deploy>fab
        Traceback (most recent call last):
          File "C:\python\Scripts\fab-script.py", line 8, in <module>
            load_entry_point('fabric==0.1.1', 'console_scripts', 'fab')()
          File "c:\python\lib\site-packages\setuptools-0.6c9-py2.6.egg\pkg_resources.py", line 277, in load_entry_point
          File "c:\python\lib\site-packages\setuptools-0.6c9-py2.6.egg\pkg_resources.py", line 2180, in load_entry_point
          File "c:\python\lib\site-packages\setuptools-0.6c9-py2.6.egg\pkg_resources.py", line 1913, in load
          File "build\bdist.win32\egg\fabric.py", line 25, in <module>
        ImportError: No module named readline

    Installing the module with easy_install readline results in:

        Searching for readline
        Reading http://pypi.python.org/simple/readline/
        Reading http://www.python.org/
        Best match: readline 2.6.4
        Downloading http://pypi.python.org/packages/source/r/readline/readline-2.6.4.tar.gz#md5=7568e8b78f383443ba57c9afec6f4285
        Processing readline-2.6.4.tar.gz
        Running readline-2.6.4\setup.py -q bdist_egg --dist-dir c:\docume~1\ji81b9~1.che\locals~1\temp\easy_install-pzkz1a\readline-2.6.4\egg-dist-tmp-szs2ps
        Traceback (most recent call last):
          File "C:\python\Scripts\easy_install-script.py", line 8, in <module>
            load_entry_point('setuptools==0.6c9', 'console_scripts', 'easy_install')()
          File "c:\python\lib\site-packages\setuptools-0.6c9-py2.6.egg\setuptools\command\easy_install.py", line 1671, in main
          File "c:\python\lib\site-packages\setuptools-0.6c9-py2.6.egg\setuptools\command\easy_install.py", line 1659, in with_ei_usage
          File "c:\python\lib\site-packages\setuptools-0.6c9-py2.6.egg\setuptools\command\easy_install.py", line 1675, in <lambda>
          File "c:\python\lib\distutils\core.py", line 152, in setup
            dist.run_commands()
          File "c:\python\lib\distutils\dist.py", line 975, in run_commands
            self.run_command(cmd)
          File "c:\python\lib\distutils\dist.py", line 995, in run_command
            cmd_obj.run()
          File "c:\python\lib\site-packages\setuptools-0.6c9-py2.6.egg\setuptools\command\easy_install.py", line 211, in run
          File "c:\python\lib\site-packages\setuptools-0.6c9-py2.6.egg\setuptools\command\easy_install.py", line 446, in easy_install
          File "c:\python\lib\site-packages\setuptools-0.6c9-py2.6.egg\setuptools\command\easy_install.py", line 476, in install_item
          File "c:\python\lib\site-packages\setuptools-0.6c9-py2.6.egg\setuptools\command\easy_install.py", line 655, in install_eggs
          File "c:\python\lib\site-packages\setuptools-0.6c9-py2.6.egg\setuptools\command\easy_install.py", line 930, in build_and_install
          File "c:\python\lib\site-packages\setuptools-0.6c9-py2.6.egg\setuptools\command\easy_install.py", line 919, in run_setup
          File "c:\python\lib\site-packages\setuptools-0.6c9-py2.6.egg\setuptools\sandbox.py", line 27, in run_setup
          File "c:\python\lib\site-packages\setuptools-0.6c9-py2.6.egg\setuptools\sandbox.py", line 63, in run
          File "c:\python\lib\site-packages\setuptools-0.6c9-py2.6.egg\setuptools\sandbox.py", line 29, in <lambda>
          File "setup.py", line 93, in <module>
        AttributeError: 'module' object has no attribute 'symlink'

    Has anybody solved this issue, or can anybody suggest a workaround?

    Read the article

  • MongoMapper and migrations

    - by Clint Miller
    I'm building a Rails application using MongoDB as the back-end and MongoMapper as the ORM tool. Suppose in version 1, I define the following model:

        class SomeModel
          include MongoMapper::Document
          key :some_key, String
        end

    Later, in version 2, I realize that I need a new required key on the model. So, in version 2, SomeModel now looks like this:

        class SomeModel
          include MongoMapper::Document
          key :some_key, String
          key :some_new_key, String, :required => true
        end

    How do I migrate all my existing data to include some_new_key? Assume that I know how to set a reasonable default value for all the existing documents. Taking this a step further, suppose that in version 3, I realize that I really don't need some_key at all. So, now the model looks like this:

        class SomeModel
          include MongoMapper::Document
          key :some_new_key, String, :required => true
        end

    But all the existing records in my database have values set for some_key, and it's just wasting space at this point. How do I reclaim that space?

    With ActiveRecord, I would have just created migrations to add the initial values of some_new_key (in the version 1 to version 2 migration) and to delete the values for some_key (in the version 2 to version 3 migration). What's the appropriate way to do this with MongoDB/MongoMapper? It seems to me that some method of tracking which migrations have been run is still necessary. Does such a thing exist?

    EDITED: I think people are missing the point of my question. There are times when you want to be able to run a script on a database to change or restructure the data in it. I gave two examples above, one where a new required key was added and one where a key can be removed and space can be reclaimed. How do you manage running these scripts? ActiveRecord migrations give you an easy way to run these scripts and to determine which scripts have already been run and which have not. I can obviously write a Mongo script that does any update on the database, but what I'm looking for is a framework, like migrations, that lets me track which upgrade scripts have already been run.

    Read the article

  • QueryInterface fails at casting inside COM-interface implementation

    - by brecht
    I am creating a tool in C# to retrieve messages from a CAN network (the network in a car) using a DLL written in C/C++. This DLL is usable as a COM interface. My C# form class implements one of these COM interfaces, and other variables are instantiated using these COM interfaces (everything works perfectly).

    The problem: the interface my C# form implements has 3 abstract functions. One of these functions is called (by the DLL) and I need to implement it myself. In this function I wish to retrieve a property of a form-wide variable that is of a COM type. The COM library is CANSUPPORTLib.

    The form-wide variable:

        private CANSUPPORTLib.ICanIOEx devices = new CANSUPPORTLib.CanIO();

    The receiver is also form-wide and is retrieved via the devices variable:

        canreceiver = (CANSUPPORTLib.IDirectCAN2)devices.get_DirectDispatch(receiverLogicalChannel);

    The function that is called by the DLL and implemented in C#:

        public void Message(double dTimeStamp)
        {
            Console.WriteLine("!!! message ontvangen !!!" + Environment.NewLine);
            try
            {
                CANSUPPORTLib.can_msg_tag message = new CANSUPPORTLib.can_msg_tag();
                message = (CANSUPPORTLib.can_msg_tag)System.Runtime.InteropServices.Marshal.PtrToStructure(
                    canreceiver.RawMessage, message.GetType());
                for (int i = 0; i < message.data.Length; i++)
                {
                    Console.WriteLine("byte " + i + ": " + message.data[i]);
                }
            }
            catch (Exception e)
            {
                Console.WriteLine(e.Message);
            }
        }

    The error arises at the line with the Marshal.PtrToStructure call:

        Unable to cast COM object of type 'System.__ComObject' to interface type 'CANSUPPORTLib.IDirectCAN2'. This operation failed because the QueryInterface call on the COM component for the interface with IID '{33373EFC-DB42-48C4-A719-3730B7F228B5}' failed due to the following error: No such interface supported (Exception from HRESULT: 0x80004002 (E_NOINTERFACE)).

    Notes: It is possible to have a timer clock that checks every 100 ms for the message I need. The message is then retrieved in exactly the same way as I do now. This timer is started when the form starts, and the checking is only done when Message(double) has set a variable to true (a message arrived). When the timer clock is started in the Message function, I get the same error as above. Starting another thread when the form starts is also not possible. Is there someone with experience in COM interop?
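
    An E_NOINTERFACE from QueryInterface when a cached COM wrapper is touched from a different thread is the classic symptom of crossing a COM apartment boundary without a registered proxy/stub. A hedged sketch of one way to test this, re-acquiring the interface on the callback's own thread instead of reusing the form-wide canreceiver (the API names come from the question; whether get_DirectDispatch may be called from that thread is an assumption):

        public void Message(double dTimeStamp)
        {
            // Hypothesis test: build the COM objects on the thread that uses them,
            // instead of reusing wrappers created on the UI thread.
            // receiverLogicalChannel is the field from the question.
            var devicesLocal = new CANSUPPORTLib.CanIO();
            var receiverLocal = (CANSUPPORTLib.IDirectCAN2)devicesLocal.get_DirectDispatch(receiverLogicalChannel);

            var message = new CANSUPPORTLib.can_msg_tag();
            message = (CANSUPPORTLib.can_msg_tag)System.Runtime.InteropServices.Marshal.PtrToStructure(
                receiverLocal.RawMessage, message.GetType());
        }

    If that works, marshaling the interface between threads (or arranging for the callback to arrive on the thread that created the objects) would be the longer-term fix.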

    Read the article

  • ASP.NET Ajax tab container not appearing

    - by Eyla
    I created a new web project using VS 2008 with the Ajax-enabled template, C# and Framework 3.5. I added the Ajax reference to the project and I can see the whole Ajax Toolkit in my toolbox. The problem is that when I add a tab container with tab panels and then run the project, nothing appears in the browser, and I tried a few browsers. I'm including my code and I hope that someone can help me. Regards,

    My code:

        <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="Contacts._Default" %>
        <%@ Register assembly="AjaxControlToolkit" namespace="AjaxControlToolkit" tagprefix="asp" %>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head runat="server">
            <title>Untitled Page</title>
        </head>
        <body>
            <form id="form1" runat="server">
                <asp:ScriptManager ID="ScriptManager1" runat="server" />
                <div>
                    <asp:TabContainer ID="TabContainer1" runat="server" ActiveTabIndex="0">
                        <asp:TabPanel runat="server" HeaderText="TabPanel1" ID="TabPanel1">
                            <ContentTemplate> tab 1 </ContentTemplate>
                        </asp:TabPanel>
                        <asp:TabPanel runat="server" HeaderText="TabPanel2" ID="TabPanel2">
                            <ContentTemplate> tab 2 </ContentTemplate>
                        </asp:TabPanel>
                        <asp:TabPanel runat="server" HeaderText="TabPanel3" ID="TabPanel3">
                            <ContentTemplate> tab 3 </ContentTemplate>
                        </asp:TabPanel>
                    </asp:TabContainer>
                </div>
            </form>
        </body>
        </html>

    Read the article

  • Reliable way of generating unique hardware ID

    - by mr.b
    Question: what's the best way to accomplish the following? I have to come up with a unique ID for each networked client, such that:

    - the ID persists once the client software is installed on the target computer, and continues to persist if the software is re-installed on the same computer and the same OS installation,
    - it does not change if the hardware configuration is modified in most ways (except changing the motherboard),
    - when a hard drive with the client software installed is cloned to another computer with an identical (or as similar as possible) hardware configuration, the client software is aware of that change.

    A little bit of explanation and some back-story: this is basically the age-old question that also touches the topic of software copy protection, as some of the mechanisms used in that area are mentioned here. I should be clear at this point that I'm not looking for a copy-protection scheme. Please, read on. :)

    I'm working on client-server software that is supposed to work in a local network. One of the problems I have to solve is to identify each unique client in the network (not so much of a problem), so that I can apply certain attributes to every specific client, and retain and enforce those attributes during the deployment lifetime of a specific client.

    While I was looking for a solution, I was aware of the following: the Windows activation system uses some kind of heavy fingerprinting mechanism that is extremely sensitive to hardware modifications, and disk-imaging software copies along all Volume IDs (tied to each partition when formatted) as well as any custom, uniquely generated IDs created during installation or first run. Anything that is strictly software in nature and stored in the registry or on the hard drive travels with the clone, so it's very easy to confuse the two.

    The obvious choice for this kind of problem would be to read the BIOS identifiers (not 100% sure if these are unique across identical motherboard models, though), as that's the only thing I can rely on that isn't duplicated or transferred by cloning and can't be changed (at least not by using some user-space program). Everything else fails as either unreliable (MAC cloning, anyone?) or too demanding (in the sense that it's too sensitive to configuration changes).

    Am I missing something obvious here? A sub-question I'd like to ask is: am I doing this correctly, architecture-wise? Perhaps there is a better tool for the task I have to accomplish... Another approach I had in mind is something similar to a handshake mechanism, where the server maintains an internal lookup table of connected client IDs (which can be completely software-based and non-unique at any given moment) and tells a client to come up with a different ID during the handshake if a duplicate ID is provided upon connection. That approach, unfortunately, doesn't play nicely with the requirement to tie attributes to a specific client during its lifetime.
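
    Since BIOS identifiers came up as the leading candidate: a minimal C# sketch reading them via WMI. Win32_BIOS is a standard WMI class, but whether its SerialNumber is populated, and unique across boards of the same model, varies by OEM, so treat that as an assumption to verify:

        using System;
        using System.Management; // add a reference to System.Management.dll

        class BiosId
        {
            static void Main()
            {
                var searcher = new ManagementObjectSearcher(
                    "SELECT SerialNumber, Manufacturer FROM Win32_BIOS");
                foreach (ManagementObject bios in searcher.Get())
                {
                    // Some OEMs leave this blank or use filler values;
                    // combine it with other sources before trusting it as unique.
                    Console.WriteLine("{0} / {1}",
                        bios["Manufacturer"], bios["SerialNumber"]);
                }
            }
        }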

    Read the article

  • ActiveX component in Axapta

    - by Nico
    Hi folks, I'm struggling with a .NET ActiveX component I'm trying to use in MS Axapta 2009. Using this component on my local machine, where it was compiled, it works quite fine: it can be added as an ActiveX element on a form, the methods and events are listed in the Axapta ActiveX explorer, and I can interact with it without any problems.

    But trying to distribute the DLL to other clients isn't working as intended. The registration of the DLL via regasm /codebase /tlb works properly; I get the message that registration was successful. The component is also listed when selecting an ActiveX element to add in AX, but neither functions nor properties are listed, and launching the form results in an error message: ActiveX component CLSID ... not found on system, not installed. The CLSID is indeed the one defined in .NET.

    Strange things happen when having a look at the task manager. The ActiveX component itself is just a wrapper to interact with a COM application. When launching the AX form with the not-working and "not installed" ActiveX, the task manager shows a new process of the COM application, which is instantiated by the ActiveX component :/

    Things I tried:

    - using different versions of regasm, e.g. \Windows\Microsoft.NET\Framework\v2.0.50727 and C:\Windows\Microsoft.NET\Framework64\v2.0.50727
    - using new GUIDs in .NET, after first removing the old ones from the registry
    - compiling with different versions of the .NET Framework
    - doing registration via regasm, regasm /codebase, regasm /codebase /tlb, and via a Visual Studio setup project
    - running registration via the command line as administrator
    - running the setup as administrator
    - even running AX as administrator on the client machine
    - moving the DLL to a different folder followed by a new registration (windows/system32; ax/client/bin)
    - installing to the GAC (gacutil /i)
    - different project options in Visual Studio (COM visibility; register for COM interop; different target platforms)

    Hoping that compiling in Visual Studio with the "Register for COM interop" option does something more than the plain regasm registration, I used a Microsoft registry-monitor tool to log the registry activity during compilation. Using these logs to create all the registry entries on the target client in addition didn't work either.

    Any hints or help would be so much appreciated! This thing has been blocking me for days now :(
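
    For reference, a hedged sketch of the attributes a COM-exposed .NET class usually carries; fixed GUIDs (rather than auto-generated ones) make registration differences between machines much easier to diagnose. All names and GUIDs below are illustrative:

        using System;
        using System.Runtime.InteropServices;

        [ComVisible(true)]
        [Guid("D6F88E95-8A27-4AE6-B6DE-0542A0FC7039")] // illustrative GUID
        public interface IMyControl
        {
            void DoWork();
        }

        [ComVisible(true)]
        [Guid("13FE32AD-4BF8-495F-AB4D-6C61BD463EA4")] // illustrative GUID
        [ClassInterface(ClassInterfaceType.None)]
        [ProgId("MyCompany.MyControl")]
        public class MyControl : IMyControl
        {
            public void DoWork() { }
        }

    Also worth checking on 64-bit machines: the 32-bit and 64-bit regasm versions tried above write to different registry views (the 32-bit view lives under Wow6432Node), and a 32-bit client process only reads the 32-bit view, so a CLSID registered only in the 64-bit view looks exactly like "not installed". Whether the AX client on those machines runs 32-bit is an assumption to verify.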

    Read the article

  • Connection Error using NHibernate 3.0 with Oracle

    - by Olu Lawrence
    I'm new to NHibernate. My first attempt is to configure and establish a connection to Oracle 11g (11.1) using ODP. For this test I use a test fixture, but I get the following error:

        Inner exception: "Object reference not set to an instance of an object."
        Outer exception: Could not create the driver from NHibernate.Driver.OracleDataClientDriver.

    The test script is shown below:

        using IBCService.Models;
        using NHibernate.Cfg;
        using NHibernate.Tool.hbm2ddl;
        using NUnit.Framework;

        namespace IBCService.Tests
        {
            [TestFixture]
            public class GenerateSchema_Fixture
            {
                [Test]
                public void Can_generate_schema()
                {
                    var cfg = new Configuration();
                    cfg.Configure();
                    cfg.AddAssembly(typeof(Product).Assembly);
                    var fac = new SchemaExport(cfg);
                    fac.Execute(false, true, false);
                }
            }
        }

    The exception occurs at the last line: fac.Execute(false, true, false); The NHibernate config is shown:

        <?xml version="1.0" encoding="utf-8"?>
        <!-- This config uses the Oracle Data Provider (ODP.NET) -->
        <hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
          <session-factory name="IBCService.Tests">
            <property name="connection.driver_class">
              NHibernate.Driver.OracleDataClientDriver
            </property>
            <property name="connection.connection_string">
              User ID=TEST;Password=test;Data Source=//RAND23:1521/RAND.PREVALENT.COM
            </property>
            <property name="connection.provider">
              NHibernate.Connection.DriverConnectionProvider
            </property>
            <property name="show_sql">false</property>
            <property name="dialect">NHibernate.Dialect.Oracle10gDialect</property>
            <property name="query.substitutions">
              true 1, false 0, yes 'Y', no 'N'
            </property>
            <property name="proxyfactory.factory_class">
              NHibernate.ByteCode.LinFu.ProxyFactoryFactory, NHibernate.ByteCode.LinFu
            </property>
          </session-factory>
        </hibernate-configuration>

    Now, if I change NHibernate.Driver.OracleDataClientDriver to NHibernate.Driver.OracleClientDriver (the Microsoft provider for Oracle), the test succeeds. Once switched back to the Oracle provider, whichever version, the test fails with the error stated earlier. I've spent 3 days already trying to figure out what is wrong, without success. I hope someone out there can provide useful info on what I am doing wrong.
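
    A common cause of "Could not create the driver" with OracleDataClientDriver (as opposed to the Microsoft driver, which ships with the framework) is that Oracle.DataAccess.dll can't be loaded at runtime: missing from the test project's output folder, or a 32/64-bit mismatch with the installed Oracle client. A small hedged check, run from the same test context, that usually produces a more direct error message:

        using System;
        using System.Reflection;

        class OdpLoadCheck
        {
            static void Main()
            {
                try
                {
                    // Probes the application base by simple name, which is
                    // essentially what NHibernate's driver factory relies on.
                    Assembly odp = Assembly.Load("Oracle.DataAccess");
                    Console.WriteLine("Loaded: " + odp.FullName);
                }
                catch (Exception ex)
                {
                    Console.WriteLine("ODP.NET not loadable: " + ex.Message);
                }
            }
        }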

    Read the article

  • JavaScript problem on tab control

    - by mahfuz
    My tab control has four items. Each tab item is an individual user control, and they have JavaScript methods. But only tab item 1 (index=0) works well; the JavaScript of the remaining items doesn't work. Individually they all work well; the problem arises when I put them in the tab control: when I click the other tab items, the JavaScript of the tab item 1 (index=0) .ascx page always loads. The server-side C# event code works well for all tab items. How do I solve this problem? What is the problem? I use the DevExpress tool.

        <table>
          <tr>
            <td>
              <dxtc:ASPxPageControl Width="500px" ID="ASPxPageControl1" runat="server"
                  ActiveTabIndex="0" EnableCallbackCompression="True"
                  EnableHierarchyRecreation="True" AutoPostBack="True">
                <TabPages>
                  <dxtc:TabPage Text="Charge Company">
                    <ContentCollection>
                      <dxw:ContentControl ID="ContentControl1" runat="server">
                        <uc1:UCConfig_Charge_Company_Wise ID="UCConfig_Charge_Company_Wise" runat="server" />
                      </dxw:ContentControl>
                    </ContentCollection>
                  </dxtc:TabPage>
                  <dxtc:TabPage Text="Charge Depository Company">
                    <ContentCollection>
                      <dxw:ContentControl ID="ContentControl3" runat="server">
                        <uc2:UCConfig_Charge_Depository_Company_Wise ID="UCConfig_Charge_Depository_Company_Wise" runat="server" />
                      </dxw:ContentControl>
                    </ContentCollection>
                  </dxtc:TabPage>
                  <dxtc:TabPage Text="Investor Charge ">
                    <ContentCollection>
                      <dxw:ContentControl ID="ContentControl5" runat="server">
                        <uc4:UCConfig_Investor_Account_Wise_Charge ID="UCConfig_Investor_Account_Wise_Charge" runat="server" />
                      </dxw:ContentControl>
                    </ContentCollection>
                  </dxtc:TabPage>
                  <dxtc:TabPage Text="Charge Operation Mode ">
                    <ContentCollection>
                      <dxw:ContentControl ID="ContentControl4" runat="server">
                        <uc3:UCconfig_charge_operation_mode ID="UCconfig_charge_operation_mode" runat="server" />
                      </dxw:ContentControl>
                    </ContentCollection>
                  </dxtc:TabPage>
                </TabPages>
              </dxtc:ASPxPageControl>
            </td>
          </tr>
        </table>

    My first tab control item works well, but the rest of them create problems. I need to call JavaScript in all of them, but calling JavaScript from the others shows me the error below:

        Microsoft JScript runtime error: 'this.GetStateInput().value' is null or not an object

    Read the article

  • Tactics for using PHP in a high-load site

    - by Ross
    Before you answer this: I have never developed anything popular enough to attain high server loads. Treat me as (sigh) an alien that has just landed on the planet, albeit one that knows PHP and a few optimisation techniques. I'm developing a tool in PHP that could attain quite a lot of users, if it works out right. However, while I'm fully capable of developing the program, I'm pretty much clueless when it comes to making something that can deal with huge traffic. So here are a few questions on it (feel free to turn this question into a resource thread as well).

    Databases. At the moment I plan to use the MySQLi features in PHP5. However, how should I set up the databases in relation to users and content? Do I actually need multiple databases? At the moment everything's jumbled into one database, although I've been considering spreading user data to one, actual content to another, and finally core site content (template masters etc.) to a third. My reasoning behind this is that sending queries to different databases will ease the load on them, as one database becomes three load sources. Also, would this still be effective if they were all on the same server?

    Caching. I have a template system that is used to build the pages and swap out variables. Master templates are stored in the database, and each time a template is called, its cached copy (an HTML document) is used. At the moment I have two types of variable in these templates: a static var and a dynamic var. Static vars are usually things like page names or the name of the site, things that don't change often; dynamic vars are things that change on each page load. My question on this: say I have comments on different articles. Which is the better solution: store the simple comment template and render comments (from a DB call) each time the page is loaded, or store a cached copy of the comments page as an HTML page, re-cached each time a comment is added/edited/deleted?

    Finally: does anyone have any tips/pointers for running a high-load site on PHP? I'm pretty sure it's a workable language to use, Facebook and Yahoo! give it great precedence, but are there any experiences I should watch out for? Thanks, Ross

    Read the article

  • How to construct a flowchart/storyboard in a Func Spec

    - by PeterQ
    Hey, I'm a bit embarrassed to write a post on this topic, but I would appreciate the help. At my school, the CS kids (myself included) have created a nice little program that is built for incoming Chem/Bio students. It consists of several modules that review topics they should have a firm grasp on before they start their classes. It's a nice tool, since it cuts down on reviewing the material in class but also allows the students to do a quick diagnostic to fix any problems.

    Now, I'm in charge of constructing a simple interface that reports on the progress of the group and of individual students. The interface is pretty simple. It's just a window that pops up, and it has three tabs: the first tab is a "cumulative" report of all of the students. The second tab has a drop-down box that lists the students, and once a student is selected, a report for him/her comes up. And the third tab is simply a description of all of the terms used in the first and second tabs.

    Now, I'm trying to be a good CS student and write a functional spec for my interface. My problem comes with the fact that I'd like to insert a little flowchart using Visio. Problem is, and I'm quite embarrassed to admit this, I don't know how to construct the flowchart/storyboard. For instance, I know I start with a "Start/Click Icon" in a rectangle. Then where do I go? Do I draw three arrows (one going to each tab) and then describe what goes on? In tab one, the only thing that happens is that the user selects a "sort" method in the drop-down box. This sorts the list. The end. Similarly, if the user selects the second tab, then he will go to a drop-down box with the student names, and selecting a name will bring up that student's info. And the third tab is just a list of unfamiliar terms coming from the first or second tab.

    I wanted to storyboard/flowchart what I'm doing, but I'm unclear how to go about it. Any help would be appreciated! (Just to clarify, I'm not having trouble with using Visio; I don't know how one goes about constructing a storyboard, or determining the procedure for constructing one.)

    Read the article

  • Automatic incremental SQL Script generation for incremental, nightly builds when using Team Build in

    - by Steve Johnson
    Hi all, hope that everybody here is OK. We are using VS 2008 as our development tool and TFS 2008 for version control as well as build automation. Some of our developers use DB Pro for database changes and some use SQL Server Management Studio. I am trying to automate the build for a web application built using C# and VB.NET.

    Our scenario is such that we have a central database to which our web application connects. Whenever we supply our clients with new functionality or a bug fix, we supply them with incremental builds. The SQL script is checked into source control for every incremental build once the developers have made and tested their changes on our central DB server. I want to generate a differential script that can be run at the client as an incremental update script.

    Now, coming up with that script is a problem. Sometimes our developers forget the database change-sets, and the script in source control is missing an SP or two. Also, sometimes we need to insert default data into some of the tables that require strict, stringent values and not test values. For example, in a table that contains the services provided by the panel, we add a new service name, signature, credentials, service address, and so on. Besides this, many other tables may have test data that is not needed. If we use Data Compare, it will generate a change-set for the required data (important for the client to enable certain services) as well as for our test data that was added to the database as a result of our testing of the functionality or bug fix.

    Currently I am using SQLSchemaCompareTask (from the Visual Studio 2008 Team Database Professional Power Tools API) in the TFSBuild.proj file of the build definition for TFS 2008. Using SQLSchemaCompareTask, the generated script contains schema-qualified names like [dbo]., which are not desired, as the script fails when run against SQL Server 2000 databases (some of our clients still use SQL Server 2000 as the back-end of the application). Also, default data can't be generated by this process.

    To overcome this problem, I have to come up with a solution that can compare databases and generate a script automatically, one that does not have to be manually reviewed again before being sent to the client. Please suggest an effective methodology for such SQL script generation, and whether two reference databases should be used, or something else. Is there any toolkit or API that can enable build automation for SQL Server databases? Thank you all. Regards, Steve

    Read the article

  • Subversion vision and roadmap

    - by gbjbaanb
    Recently C. Michael Pilato of the core Subversion team posted a mail to the Subversion dev mailing list suggesting a vision and roadmap for the future of Subversion. Naturally, he wanted as much feedback and response as possible, which is why I'm posting this here: to elicit some suggestions and contributions from you, the users of Subversion. Any comments are welcome, and I shall feed back a synopsis with a link to this question to the dev mailing list. Similarly, I've created a post on ServerFault to get feedback from the administrator side of things too. So, without further ado:

    Vision. The first thing in his "vision statement" is:

        Subversion has no future as a DVCS tool. Let's just get that out there. At least two very successful such tools exist already, and to squeeze another horse into that race would be a poor investment of energy and talent.

    There's no need to suggest distributed features for Subversion. If you want a DVCS, there should be no ill feeling if you migrate to Git, Mercurial or Bazaar. As he says, it's pointless trying to make SVN like them when they already exist, especially when there are different usage patterns that SVN should be targeting. The vision for Subversion is:

        Subversion exists to be universally recognized and adopted as an open-source, centralized version control system characterized by its reliability as a safe haven for valuable data; the simplicity of its model and usage; and its ability to support the needs of a wide variety of users and projects, from individuals to large-scale enterprise operations.

    Roadmap. Several ideas were suggested as being "very nice to have" and are offered as the starting point of a future roadmap. These are:

    - Obliterate
    - Shelve/Checkpoint
    - Repository-dictated Configuration
    - Rename Tracking
    - Improved Merging
    - Improved Tree Conflict Handling
    - Enterprise Authentication Mechanisms
    - Forward History Searching
    - Log Message Templates

    If anyone has suggestions to add, or comments on these, the Subversion community would welcome all of them.

    Community. And lastly, there was a call for more people to become involved with Subversion development. As with most OSS projects it can be daunting to join, but there is now a push for more to be done to help. If you feel you can contribute, please do so.

    Read the article

  • SubSonic 2.x now supports TVPs - SqlDbType.Structured / DataTables for SQL Server 2008

    - by ElHaix
    For those interested, I have now modified the SubSonic 2.x code to recognize and support DataTable parameter types. You can read more about the SQL Server 2008 features here: http://download.microsoft.com/download/4/9/0/4906f81b-eb1a-49c3-bb05-ff3bcbb5d5ae/SQL%20SERVER%202008-RDBMS/T-SQL%20Enhancements%20with%20SQL%20Server%202008%20-%20Praveen%20Srivatsav.pdf

    What this enhancement now allows you to do is create a partial StoredProcedures.cs class, with a method that overrides the stored procedure wrapper method. A bit about good form: my DAL has no direct table access, and my DB only has execute permissions for that user to my sprocs. As such, SubSonic only generates the AllStructs and StoredProcedures classes.

    The sproc:

        ALTER PROCEDURE [dbo].[testInsertToTestTVP]
            @UserDetails TestTVP READONLY,
            @Result INT OUT
        AS
        BEGIN
            SET NOCOUNT ON;
            SET @Result = -1

            --SET IDENTITY_INSERT [dbo].[tbl_TestTVP] ON
            INSERT INTO [dbo].[tbl_TestTVP] ( [GroupInsertID], [FirstName], [LastName] )
            SELECT [GroupInsertID], [FirstName], [LastName]
            FROM @UserDetails

            IF @@ROWCOUNT > 0
            BEGIN
                SET @Result = 1
                SELECT @Result
                RETURN @Result
            END
            --SET IDENTITY_INSERT [dbo].[tbl_TestTVP] OFF
        END

    The TVP:

        CREATE TYPE [dbo].[TestTVP] AS TABLE(
            [GroupInsertID] [varchar](50) NOT NULL,
            [FirstName] [varchar](50) NOT NULL,
            [LastName] [varchar](50) NOT NULL
        )
        GO

    When the auto-gen tool runs, it creates the following erroneous method:

        /// <summary>
        /// Creates an object wrapper for the testInsertToTestTVP Procedure
        /// </summary>
        public static StoredProcedure TestInsertToTestTVP(string UserDetails, int? Result)
        {
            SubSonic.StoredProcedure sp = new SubSonic.StoredProcedure("testInsertToTestTVP", DataService.GetInstance("MyDAL"), "dbo");
            sp.Command.AddParameter("@UserDetails", UserDetails, DbType.AnsiString, null, null);
            sp.Command.AddOutputParameter("@Result", DbType.Int32, 0, 10);
            return sp;
        }

    It sets UserDetails as type string. As it's good form to have two folders for a SubSonic DAL, Custom and Generated, I created a StoredProcedures.cs partial class in Custom that looks like this:

        /// <summary>
        /// Creates an object wrapper for the testInsertToTestTVP Procedure
        /// </summary>
        public static StoredProcedure TestInsertToTestTVP(DataTable dt, int? Result)
        {
            DataSet ds = new DataSet();
            SubSonic.StoredProcedure sp = new SubSonic.StoredProcedure("testInsertToTestTVP", DataService.GetInstance("MyDAL"), "dbo");
            // TODO: Modify the SubSonic code base in sp.Command.AddParameter to accept
            // a parameter type of System.Data.SqlDbType.Structured, as it currently only
            // accepts System.Data.DbType.
            //sp.Command.AddParameter("@UserDetails", dt, System.Data.SqlDbType.Structured, null, null);
            sp.Command.AddParameter("@UserDetails", dt, SqlDbType.Structured);
            sp.Command.AddOutputParameter("@Result", DbType.Int32, 0, 10);
            return sp;
        }

    As you can see, the method signature now contains a DataTable, and with my modification to the SubSonic framework this now works perfectly. I'm wondering if the SubSonic guys can modify the auto-gen to recognize a TVP in a sproc signature, so as to avoid having to re-write the wrapper? Does SubSonic 3.x support structured data types? Also, I'm sure many will be interested in using this code, so where can I upload it? Thanks.
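
    A usage sketch for the hand-written wrapper, building a DataTable whose columns match the TestTVP type above (column names and order must match the table type; the row values are made up, and the Execute call follows SubSonic's usual wrapper pattern):

        using System.Data;

        static void InsertUsers()
        {
            var dt = new DataTable();
            dt.Columns.Add("GroupInsertID", typeof(string));
            dt.Columns.Add("FirstName", typeof(string));
            dt.Columns.Add("LastName", typeof(string));
            dt.Rows.Add("batch-001", "Ada", "Lovelace");
            dt.Rows.Add("batch-001", "Charles", "Babbage");

            // Passes the whole table to SQL Server 2008 in one round trip.
            StoredProcedures.TestInsertToTestTVP(dt, null).Execute();
        }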

    Read the article

  • XML: Process large data

    - by Atmocreations
    Hello. What XML parser do you recommend for the following purpose? The XML file (formatted, containing whitespace) is around 800 MB. It mostly contains three types of tag (let's call them n, w and r). They have an attribute called id which I'd have to search for, as fast as possible.

    First part, preparing for the second part: removing attributes I don't need could save around 30%, maybe a bit more. Is there any good tool (command-line, Linux and Windows if possible) to easily remove unused attributes in certain tags? I know that XSLT could be used. Or are there any easy alternatives? Also, I could split it into three files, one for each tag, to gain speed for later parsing... Speed is not too important for this preparation of the data; of course it would be nice if it took minutes rather than hours.

    Second part: once I have the data prepared, be it shortened or not, I should be able to search for the id attribute I was mentioning, this being time-critical. Estimates using wc -l tell me that there are around 3M n-tags and around 418K w-tags. The latter can contain up to approximately 20 subtags each. W-tags also contain some, but they would be stripped away. "All I have to do" is navigate between tags containing certain id attributes. Some tags have references to other ids, therefore giving me a tree, maybe even a graph. The original data is big (as mentioned), but the result set shouldn't be too big, as I only have to pick out certain elements.

    Now the question: what XML parsing library should I use for this kind of processing? I would use Java 6 in the first instance, with porting it to BlackBerry in mind. Might it be useful to just create a flat file indexing the ids and pointing to offsets in the file? Is it even necessary to do the optimizations mentioned in the first part? Or are there parsers known to be just as fast with the original data?

    A little note: to test, I took the id on the very last line of the file and searched for it using grep. This took around a minute on a Core 2 Duo. What happens if the file grows even bigger, let's say 5 GB? I appreciate any notes or recommendations. Thank you all very much in advance, and regards.
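
    For the search itself, a forward-only pull parser avoids ever loading the 800 MB document. The sketch below uses .NET's XmlReader purely to illustrate the pattern, since most of the code on this page is C#; the asker's Java 6 has the analogous StAX (XMLStreamReader) and SAX parsers. Tag and attribute names are taken from the question, the file name and target id are assumptions:

        using System;
        using System.Xml;

        class IdScan
        {
            static void Main()
            {
                using (XmlReader reader = XmlReader.Create("data.xml"))
                {
                    while (reader.Read())
                    {
                        if (reader.NodeType != XmlNodeType.Element)
                            continue;
                        // n, w, r and the id attribute come from the question.
                        if (reader.Name == "w" && reader.GetAttribute("id") == "w12345")
                        {
                            Console.WriteLine("found element " + reader.Name);
                            break;
                        }
                    }
                }
            }
        }

    Since the lookups are time-critical and repeated, a one-pass scan that writes id/file-offset pairs to a flat index (the asker's own suggestion) likely beats re-parsing the document per query.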

    Read the article

  • Using commit monitors as a form of code review

    - by Jeff Dege
    I'm working in a small company, four developers, working on a variety of projects. We've been looking at what we can do as cost-effective methods of process improvement, and an idea came up. Given what we do, we often have single developers working on parts of a system, independently of the other developers. This can have a number of negative effects:

    - A developer might not be fully aware of the context in which a change is being implemented, and make the change in a way that will meet the current customer's needs, but will break functionality that other customers depend on.
    - A developer might make a change that breaks the current architectural design, introducing a dependency that will cause problems in future development.
    - Other developers might not be aware of how the system has changed, in areas that they have not worked on.

    We've talked about doing code reviews as a way of dealing with these issues, but we've not had much success when we tried. It takes a lot of time to prepare a change for a code review, and it takes everybody out of production while the review is being performed. And the benefits of any review we've tried have been minimal.

    We're using Subversion (with TortoiseSVN) as our VCS. I've been looking at the Subversion CommitMonitor tool and wondering whether it might work as a sort of poor man's code review. It lists every commit made on the repository, allowing someone to see the changes that have been made, the log messages made for that change, the files that were included in the change, and the specific lines in each file that were changed.

    Rather than scheduling a meeting and trying to get everybody together to review every change, we could just have every developer review every other developer's commits, at whatever time was convenient. This would keep every developer abreast of what changes were being made elsewhere in the system, and would have every change reviewed for customer conflicts and design consistency, at a fairly low cost. If someone saw a problem with the code that was being checked in, he could discuss it with the developer who did the commit, or more likely, schedule a meeting to discuss how the new feature could be implemented in a way that would not impact other users or screw up the architecture.

    Anyone else doing anything like this, using commit monitors for such a purpose?

    Read the article

  • Where does the version number come from?

    - by Robert Schneider
    I have a version control system (e.g. Subversion) and now I'd like to set up a build process. Now I have to create a version number and insert it into the system. But where does the version number come from, and where does it go? Assume I want to use the common <major>.<minor>.<bugfix/revision> scheme. Should I pass a number to the build script? Or should I pass arguments like increaseMajor, increaseMinor, increaseRevision? Or would you recommend creating a branch with the number, which will be detected by the build script? I could imagine that the major and minor version numbers have to be put in manually somewhere. The revision number could be increased automatically. But still I don't know where I would place the major and minor numbers. In my case I have some PHP files that I would like to zip, but before that I have to insert some version numbers into the PHP files.

    I have edited this post to try to make my request clearer: I do not use Subversion; that was just an example. And I don't want to discuss the version number scheme. Imagine I want to create version 3.5.0 or 3.5.1. Would I pass this version number to a build script? Would the script create the branch in the repository with this number, or would it expect that someone has already created this branch? Manually? Or would the build script look for the name of the branch (e.g. '3.5.1') and use it for further things? And does the version number come from my brain or is it automatically created (I guess the major/minor numbers come from my little brain and the revision number is generated)? Or would you place the number into a file that gets inserted into the repository? I guess if I used a release management tool I would insert the version number there. But I don't use one yet.

    Read the article

  • Read user input of custom dialogs

    - by urobo
    I built a custom dialog starting from an AlertDialog to obtain login information from a user. The dialog contains two EditText fields; using the LayoutInflater service I obtain the layout, and I'm saving references to the fields:

        LayoutInflater inflater = (LayoutInflater) Home.this.getSystemService(LAYOUT_INFLATER_SERVICE);
        layoutLogin = inflater.inflate(R.layout.login, (ViewGroup) findViewById(R.id.rl));
        usernameInput = ((EditText) findViewById(R.id.getNewUsername));
        passwordInput = ((EditText) findViewById(R.id.getNewPassword));

    Then I have my overridden onCreateDialog(...):

        {
            AlertDialog d = null;
            AlertDialog.Builder builder = new AlertDialog.Builder(this);
            switch (id) {
                ...
                case Home.DIALOG_LOGIN:
                    builder.setView(layoutLogin);
                    builder.setMessage("Sign in to your DyCaPo Account").setCancelable(false);
                    d = builder.create();
                    d.setTitle("Login");
                    Message msg = new Message();
                    msg.setTarget(Home.this.handleLogin);
                    Bundle data = new Bundle();
                    data.putString("username", usernameInput.getText().toString()); // <--- null pointer exception
                    data.putString("password", passwordInput.getText().toString());
                    msg.setData(data);
                    d.setButton(Dialog.BUTTON_NEUTRAL, "Sign in", msg);
                    break;
                ...
            }
            return d;
        }

    And the handler set in the Message:

        private Handler handleLogin = new Handler() {
            /* (non-Javadoc)
             * @see android.os.Handler#handleMessage(android.os.Message)
             */
            @Override
            public void handleMessage(Message msg) {
                // TODO Auto-generated method stub
                Log.d("Message Received", msg.getData().getString("username")
                        + msg.getData().getString("password"));
            }
        };

    which for now works as a debugging tool. That's all. The question is: what am I doing wrong? When I reach the line highlighted in the code (the line where I read the fields of the dialog) I always get a null pointer exception. Could somebody please tell me why that is, and give some guidelines for working with dialogs? Thanks in advance!

    Read the article

  • Get HDD (and NOT Volume) Serial Number on Vista Ultimate 64 bit

    - by TheAgent
    Hi all. I was once looking for a way to get the HDD serial number without using WMI, and I found it. The code I found and posted on StackOverflow.com works very well on 32-bit Windows, both XP and Vista. The trouble only begins when I try to get the serial number on 64-bit OSs (Vista Ultimate 64, specifically): the code returns String.Empty or a space every time. Anyone got an idea how to fix this?

    EDIT: I used the tools Dave Cluderay suggested, with interesting results. Here is the output from DiskId32 on Windows XP SP2 32-bit:

        To get all details use "diskid32 /d"

        Trying to read the drive IDs using physical access with admin rights

        Drive 0 - Primary Controller - - Master drive
        Drive Model Number________________: [MAXTOR STM3160215AS]
        Drive Serial Number_______________: [ 6RA26XK3]
        Drive Controller Revision Number__: [3.AAD]
        Controller Buffer Size on Drive___: 2097152 bytes
        Drive Type________________________: Fixed
        Drive Size________________________: 160041885696 bytes

        Trying to read the drive IDs using the SCSI back door

        Drive 4 - Tertiary Controller - - Master drive
        Drive Model Number________________: [MAXTOR STM3160215AS]
        Drive Serial Number_______________: [ 6RA26XK3]
        Drive Controller Revision Number__: [3.AAD]
        Controller Buffer Size on Drive___: 2097152 bytes
        Drive Type________________________: Fixed
        Drive Size________________________: 160041885696 bytes

        Trying to read the drive IDs using physical access with zero rights

        **** STORAGE_DEVICE_DESCRIPTOR for drive 0 ****
        Vendor Id = []
        Product Id = [MAXTOR STM3160215AS]
        Product Revision = [3.AAD]
        Serial Number = []

        **** DISK_GEOMETRY_EX for drive 0 ****
        Disk is fixed
        DiskSize = 160041885696

        Trying to read the drive IDs using Smart

        Drive 0 - Primary Controller - - Master drive
        Drive Model Number________________: [MAXTOR STM3160215AS]
        Drive Serial Number_______________: [ 6RA26XK3]
        Drive Controller Revision Number__: [3.AAD]
        Controller Buffer Size on Drive___: 2097152 bytes
        Drive Type________________________: Fixed
        Drive Size________________________: 160041885696 bytes

        Hard Drive Serial Number__________: 6RA26XK3
        Hard Drive Model Number___________: MAXTOR STM3160215AS

    And DiskId32 run on Windows Vista Ultimate 64-bit:

        To get all details use "diskid32 /d"

        Trying to read the drive IDs using physical access with admin rights

        Trying to read the drive IDs using the SCSI back door

        Trying to read the drive IDs using physical access with zero rights

        **** STORAGE_DEVICE_DESCRIPTOR for drive 0 ****
        Vendor Id = [MAXTOR S]
        Product Id = [TM3160215AS]
        Product Revision = [3.AA]
        Serial Number = []

        **** DISK_GEOMETRY_EX for drive 0 ****
        Disk is fixed
        DiskSize = 160041885696

        Trying to read the drive IDs using Smart

        Hard Drive Serial Number__________:
        Hard Drive Model Number___________:

    Notice how much less information there is on Vista, and how the serial number is not returned. Also, the other tool, EnumDisk, refers to my hard disks on Vista as "SCSI" as opposed to "ATA" on Windows XP. Any ideas?

    EDIT 2: I'm posting the results from EnumDisks on Windows XP SP2 32-bit:

        Properties for Device 1
        Device ID: IDE\DiskMAXTOR_STM3160215AS_____________________3.AAD___

        Adapter Properties
        ------------------
        Bus Type       : ATA
        Max. Tr. Length: 0x20000
        Max. Phy. Pages: 0xffffffff
        Alignment Mask : 0x1

        Device Properties
        -----------------
        Device Type     : Direct Access Device (0x0)
        Removable Media : No
        Product ID      : MAXTOR STM3160215AS
        Product Revision: 3.AAD

        Inquiry Data from Pass Through
        ------------------------------
        Device Type: Direct Access Device (0x0)
        Vendor ID  : MAXTOR S
        Product ID : TM3160215AS
        Product Rev: 3.AA
        Vendor Str :

        *** End of Device List
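
    The original goal was to avoid WMI, but as a diagnostic it quickly narrows down whether the 64-bit OS exposes the serial at all: Win32_PhysicalMedia.SerialNumber may be populated even when the raw ioctl route comes back empty (whether it is depends on the storage driver stack, so treat that as an assumption to test). A minimal sketch:

        using System;
        using System.Management; // add a reference to System.Management.dll

        class PhysicalMediaSerial
        {
            static void Main()
            {
                var searcher = new ManagementObjectSearcher(
                    "SELECT Tag, SerialNumber FROM Win32_PhysicalMedia");
                foreach (ManagementObject disk in searcher.Get())
                {
                    // Values are often space-padded; trim before comparing.
                    string serial = (disk["SerialNumber"] as string ?? "").Trim();
                    Console.WriteLine("{0}: '{1}'", disk["Tag"], serial);
                }
            }
        }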

    Read the article

  • Cascade Saves with Fluent NHibernate AutoMapping - Old Answer Still Valid?

    - by Glenn
    I want to do exactly what this question asks: http://stackoverflow.com/questions/586888/cascade-saves-with-fluent-nhibernate-automapping — using Fluent NHibernate mappings to turn on "cascade" globally, once, for all classes and relation types, using one call rather than setting it for each mapping individually.

    The answer to the earlier question looks great, but I'm afraid that the Fluent NHibernate API altered its .WithConvention syntax last year and broke the answer... either that or I'm missing something. I keep getting a bunch of namespace-not-found errors relating to IOneToOnePart, IManyToOnePart and all their variations:

        "The type or namespace name 'IOneToOnePart' could not be found (are you missing a using directive or an assembly reference?)"

    I've tried the official example DLLs, the RTM DLLs and the latest build, and none of them seem to make VS 2008 see the required namespace.

    The second problem is that I want to use the class with my AutoPersistenceModel, but I'm not sure where to put this line: .ConventionDiscovery.AddFromAssemblyOf<CascadeAll>() in my factory creation method:

        private static ISessionFactory CreateSessionFactory()
        {
            return Fluently.Configure()
                .Database(SQLiteConfiguration.Standard.UsingFile(DbFile))
                .Mappings(m => m.AutoMappings
                    .Add(AutoMap.AssemblyOf<Shelf>(type => type.Namespace.EndsWith("Entities"))
                        .Override<Shelf>(map =>
                        {
                            map.HasManyToMany(x => x.Products).Cascade.All();
                        })
                    )
                ) // end mappings
                .ExposeConfiguration(BuildSchema)
                .BuildSessionFactory(); // finalizes the whole thing to send back.
        }

    Below is the class and the using statements I'm trying:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.IO;
        using FluentNHibernate.Conventions;
        using FluentNHibernate.Cfg;
        using FluentNHibernate.Cfg.Db;
        using NHibernate;
        using NHibernate.Cfg;
        using NHibernate.Tool.hbm2ddl;
        using FluentNHibernate.Mapping;

        namespace TestCode
        {
            public class CascadeAll : IHasOneConvention, IHasManyConvention, IReferenceConvention
            {
                public bool Accept(IOneToOnePart target) { return true; }
                public void Apply(IOneToOnePart target) { target.Cascade.All(); }

                public bool Accept(IOneToManyPart target) { return true; }
                public void Apply(IOneToManyPart target) { target.Cascade.All(); }

                public bool Accept(IManyToOnePart target) { return true; }
                public void Apply(IManyToOnePart target) { target.Cascade.All(); }
            }
        }
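
    For later readers, a hedged sketch of what the same convention looks like against the reworked Fluent NHibernate 1.0 convention API, where the Accept/Apply pairs over I*Part types were collapsed into single Apply methods over I*Instance types. Version drift is exactly what bit the asker, so verify against the build in use:

        using FluentNHibernate.Conventions;
        using FluentNHibernate.Conventions.Instances;

        namespace TestCode
        {
            public class CascadeAll : IHasOneConvention, IHasManyConvention, IReferenceConvention
            {
                public void Apply(IOneToOneInstance instance) { instance.Cascade.All(); }
                public void Apply(IOneToManyCollectionInstance instance) { instance.Cascade.All(); }
                public void Apply(IManyToOneInstance instance) { instance.Cascade.All(); }
            }
        }

    In that API the convention is attached to the auto-mapping itself, e.g. AutoMap.AssemblyOf<Shelf>(...).Conventions.Add<CascadeAll>(), which also answers where the old .ConventionDiscovery call would have gone.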

    Read the article

  • Is this asking too much of a browser?

    - by Matt Ball
    I'm embedding a large array in <script> tags in my HTML, like this (nothing surprising):

        <script>
            var largeArray = [/* lots of stuff in here */];
        </script>

    In this particular example, the array has 210,000 elements. That's well below the theoretical maximum of 2^31, by 4 orders of magnitude. Here's the fun part: if I save the JS source for the array to a file, that file is 44 megabytes (46,573,399 bytes, to be exact). If you want to see for yourself, you can download it from my Dropbox. (All the data in there is canned, so much of it is repeated. This will not be the case in production.)

    Now, I'm really not concerned about serving that much data. My server gzips its responses, so it really doesn't take all that long to get the data over the wire. However, there is a really nasty tendency for the page, once loaded, to crash the browser. I'm not testing at all in IE (this is an internal tool). My primary targets are Chrome 8 and Firefox 3.6. In Firefox, I can see a reasonably useful error in the console:

        Error: script stack space quota is exhausted

    In Chrome, I simply get the sad-tab page.

    Cut to the chase, already: is this really too much data for our modern, "high-performance" browsers to handle? Is there anything I can do* to gracefully handle this much data? Incidentally, I was able to get this to work (read: not crash the tab) on-and-off in Chrome. I really thought that Chrome, at least, was made of tougher stuff, but apparently I was wrong...

    Edit 1. @Crayon: I wasn't looking to justify why I'd like to dump this much data into the browser at once. Short version: either I solve this one (admittedly not-that-easy) problem, or I have to solve a whole slew of other problems. I'm opting for the simpler approach for now. @various: right now, I'm not especially looking for ways to actually reduce the number of elements in the array. I know I could implement Ajax paging or what-have-you, but that introduces its own set of problems for me in other regards. @Phrogz: each element looks something like this:

        {dateTime:new Date(1296176400000), terminalId:'terminal999',
         'General___BuildVersion':'10.05a_V110119_Beta', 'SSM___ExtId':26680,
         'MD_CDMA_NETLOADER_NO_BCAST___Valid':'false',
         'MD_CDMA_NETLOADER_NO_BCAST___PngAttempt':0}

    @Will: but I have a computer with a 4-core processor, 6 gigabytes of RAM, and over half a terabyte of disk space... and I'm not even asking for the browser to do this quickly, I'm just asking for it to work at all!

    * other than the obvious: sending less data to the browser

    Read the article

  • How to optimize Core Data query for full text search

    - by dk
    Can I optimize a Core Data query when searching for matching words in a text? (This question also pertains to the wisdom of custom SQL versus Core Data on an iPhone.)

    I'm working on a new (iPhone) app that is a handheld reference tool for a scientific database. The main interface is a standard searchable table view and I want as-you-type response as the user types new words. Word matches must be prefixes of words in the text. The text is composed of 100,000s of words. In my prototype I coded SQL directly. I created a separate "words" table containing every word in the text fields of the main entity. I indexed words and performed searches along the lines of:

        SELECT id, * FROM textTable
        JOIN (SELECT DISTINCT textTableId FROM words
              WHERE word BETWEEN 'foo' AND 'fooz') ON id = textTableId
        LIMIT 50

    This runs very fast. Using an IN would probably work just as well, i.e.:

        SELECT * FROM textTable
        WHERE id IN (SELECT textTableId FROM words
                     WHERE word BETWEEN 'foo' AND 'fooz')
        LIMIT 50

    The LIMIT is crucial and allows me to display results quickly. I notify the user that there are too many to display if the limit is reached. This is kludgy.

    I've spent the last several days pondering the advantages of moving to Core Data, but I worry about the lack of control over the schema, indexing, and querying for an important query. Theoretically an NSPredicate of textField MATCHES '.*\bfoo.*' would just work, but I'm sure it will be slow. This sort of text search seems so common that I wonder what the usual attack is. Would you create a words entity as I did above and use a predicate of "word BEGINSWITH 'foo'"? Will that work as fast as my prototype? Will Core Data automatically create the right indexes? I can't find any explicit means of advising the persistent store about indexes.

    I see some nice advantages of Core Data in my iPhone app. The faulting and other memory considerations allow for efficient database retrievals for table view queries without setting arbitrary limits. The object graph management allows me to easily traverse entities without writing lots of SQL. Migration features will be nice in the future. On the other hand, in a limited-resource environment (iPhone) I worry that an automatically generated database will be bloated with metadata, unnecessary inverse relationships, inefficient attribute datatypes, etc. Should I dive in or proceed with caution?

    Read the article

  • MSBuild appears to only use old output files for custom build tools

    - by sixlettervariables
    I have an ANTLR grammar file as part of a C# project file and followed the steps outlined in the User Manual.

        <Project ...>
          <PropertyGroup>
            <Antlr3ToolPath>$(ProjectDir)tools\antlr-3.1.3\lib</Antlr3ToolPath>
            <AntlrCleanupPath>$(ProjectDir)AntlrCleanup\$(OutputPath)</AntlrCleanupPath>
          </PropertyGroup>
          <ItemGroup>
            <Antlr3 Include="Grammar\Foo.g">
              <OutputFiles>FooLexer.cs;FooParser.cs</OutputFiles>
            </Antlr3>
            <Antlr3 Include="Grammar\Bar.g">
              <OutputFiles>BarLexer.cs;BarParser.cs</OutputFiles>
            </Antlr3>
          </ItemGroup>
          <Target Name="GenerateAntlrCode" Inputs="@(Antlr3)" Outputs="%(Antlr3.OutputFiles)">
            <Exec Command="java -cp %22$(Antlr3ToolPath)\antlr-3.1.3.jar%22 org.antlr.Tool -message-format vs2005 @(Antlr3Input)" Outputs="%(Antlr3Input.OutputFiles)" />
            <Exec Command="%22$(AntlrCleanupPath)\AntlrCleanup.exe%22 @(Antlr3Input) %(Antlr3Input.OutputFiles)" />
          </Target>
          <ItemGroup>
            <!-- ...other files here... -->
            <Compile Include="Grammar\FooLexer.cs">
              <AutoGen>True</AutoGen>
              <DesignTime>True</DesignTime>
              <DependentUpon>Foo.g</DependentUpon>
            </Compile>
            <Compile Include="Grammar\FooParser.cs">
              <AutoGen>True</AutoGen>
              <DesignTime>True</DesignTime>
              <DependentUpon>Foo.g</DependentUpon>
            </Compile>
            <!-- ... -->
          </ItemGroup>
        </Project>

    For whatever reason, the Compile steps only use old versions of the code; no amount of tweaking appears to help.

    Read the article

  • UpdateProgress with UpdatePanel not showing up in User control when page is loading

    - by Carter
    Is this typical behavior of the UpdateProgress for an ASP.NET UpdatePanel? I have an UpdatePanel with the UpdateProgress control inside a user control window on a page. If I make the page do some loading in the background and then click a button in the user control's update panel, the UpdateProgress does not show up at all. It's as if the UpdatePanel's refresh request is not even registered until after the actual page is done doing its business. It's worth noting that the UpdateProgress will show up if nothing is happening in the background.

    The functionality I want is what you would expect: I want the loader to show up if it has to wait for anything to get its refresh done after the button is clicked. I know I can get this functionality if I just use jQuery Ajax with a static web method, but you can't have static web methods inside a user control. I could have it in the page, but it really doesn't belong there, and a full-blown WCF service wouldn't really be worth it in this case either. I'm trying to compromise with an UpdatePanel, but these things always seem to cause me some kind of trouble. Maybe this is just the way it works?

    Edit: So I'll clarify a bit what I'm doing. I have a page, and all it has on it are some tools on the side and a big map. When the page initially loads, it takes some time to load the map. Now, if while it's loading I open the tool (a user control) that has the update panel in question in it, and click the button on this user control that should refresh the update panel with new data and show the loading sign (in the UpdateProgress), then the UpdateProgress loading image does not show up. However, the code run by the button click does run after the page is done loading (as expected), and the UpdateProgress will show up if nothing on the page containing the user control is loading. I just want the loader to show up while the page is loading.

    I thought my problem might be that the map loading is in an update panel and my UpdateProgress was only associated with the user control's update panel; hence I would get no loading icon when the map was loading. This is not the case, though.

    Read the article
