Search Results

Search found 10115 results on 405 pages for 'coding practices'.

  • Adventures in Windows 8: Solving activation errors

    - by Laurent Bugnion
    Note: I tagged this article with “MVVM” because I got a few support requests for MVVM Light regarding this exact issue. In fact it is a Windows 8 issue and has nothing to do with MVVM Light per se.

    Sometimes when you work on a Windows 8 app, you will hit a very annoying issue when starting the app: the app doesn’t even start past the splash screen. Putting a breakpoint in App.xaml.cs doesn’t help, because the app never reaches that point! So what exactly is happening? When a Windows 8 app starts, the system performs a few checks first. One of the checks, for instance, is to see whether an app with the same package ID is already installed. The package ID is a unique value set in the package manifest. In the Solution Explorer, double click Package.appxmanifest; this opens the manifest in a special editor. Click on the Packaging tab and see the GUID under Package Name. This is the unique ID I am talking about. If there is a conflict (i.e. if an app is already installed with the exact same ID), Windows will warn the user that the app is already installed. However, when you are in the process of developing an app, you install and uninstall the same app many, many times (every time you start it from Visual Studio), and sometimes issues arise, for instance failing to uninstall the app before starting the new instance of the same app.

    First step if you get such an error: When the application fails to start past the splash screen, the first step is to identify what kind of error happened. In my experience the “already installed” one is by far the most frequent (in fact I never had another such error), but it can be something else. An annoying thing is that the popup that shows the error is usually opened below the Windows 8 app, so you don’t even see it! This is especially true if you run this in the Simulator. In that case, press the Simulator’s home button, then press the Desktop tile on the Start menu; the error popup should be shown on the desktop. If your application runs on the Local Machine, you do the same: press the Windows button, and then from the Start menu press the Desktop tile.

    Deployment error in Studio: Sometimes the same error causes Visual Studio to fail launching the application at all, with a deployment error. This is a better case, because at least it is clear that there is an issue. In that case, write down the code that is shown in the Error window (for instance 0x80073D05). Once you have the error code, go to the “Troubleshooting packaging, deployment, and query of Windows Store apps” page and look up the code in question. In my case, the error was “ERROR_DELETING_EXISTING_APPLICATIONDATA_STORE_FAILED”: “An error occurred while deleting the package's previously existing application data.”

    Solving the “ERROR_DELETING_EXISTING_APPLICATIONDATA_STORE_FAILED” issue. Update: Before trying the below, you can also try the simple steps: exit Visual Studio; go to the Start menu; locate your app’s tile (it should be visible in the Start menu directly, towards the far end on the right); right click the tile and select Uninstall from the App Bar; restart Visual Studio and try again. Sometimes it helps. If it doesn’t, then try the following. In order to solve the case where Windows, for any reason, fails to delete the existing application before starting the new instance, follow these steps:

    1. Open Package.appxmanifest in Visual Studio.
    2. Open the Packaging tab.
    3. Change the Package Name. For tests you can just change the last character of the GUID, though I would recommend creating a brand new GUID: press Start, type GUID, start the GUID Generator application, select Registry Format and press Copy.
    4. Paste the new GUID in place of the Package Name in Visual Studio. Important: don’t forget to remove the curly brackets at the beginning and at the end of the newly pasted GUID (see the snippet below for a shortcut).
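    By the way, if you would rather not launch the GUID Generator app at all, the "D" format specifier of Guid.ToString already gives you the dashed GUID without the curly brackets (Registry Format, "B", is the one with braces). A minimal sketch, as a throwaway console app:

        using System;

        class NewPackageName
        {
            static void Main()
            {
                // "D" = 32 digits separated by hyphens, no braces --
                // ready to paste straight into the Package Name field.
                Console.WriteLine(Guid.NewGuid().ToString("D"));
            }
        }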
    Then you just have to cross your fingers and start the application again. If it works, celebrate. If it doesn’t work… well, at this point I am not sure, so good luck with that ;) Happy coding, Laurent

    Laurent Bugnion (GalaSoft) Subscribe | Twitter | Facebook | Flickr | LinkedIn

  • StreamInsight 2.1 Released

    - by Roman Schindlauer
    The wait is over: we are pleased to announce the release of StreamInsight 2.1. Since the release of version 1.2 we have heard your feedback and suggestions, and based on that we have come up with a whole new set of features. Here are some of the highlights:

    A New Programming Model – a clearer and more consistent object model, eliminating the need for complex input and output adapters (though they are still completely supported). This new model allows you to provision, name, and manage data sources and sinks in the StreamInsight server.

    Tight integration with the Reactive Framework (Rx) – you can write reactive queries hosted inside StreamInsight as well as compose temporal queries on reactive objects.

    High Availability – checkpointing over temporal streams and multiple processes with shared computation.

    Here is how simple coding can be with the 2.1 Programming Model:

        class Program
        {
            static void Main(string[] args)
            {
                using (Server server = Server.Create("Default"))
                {
                    // Create an app
                    Application app = server.CreateApplication("app");

                    // Define a simple observable which generates an integer every second
                    var source = app.DefineObservable(() =>
                        Observable.Interval(TimeSpan.FromSeconds(1)));

                    // Define a sink.
                    var sink = app.DefineObserver(() =>
                        Observer.Create<long>(x => Console.WriteLine(x)));

                    // Define a query to filter the events
                    var query = from e in source
                                where e % 2 == 0
                                select e;

                    // Bind the query to the sink and create a runnable process
                    using (IDisposable proc = query.Bind(sink).Run("MyProcess"))
                    {
                        Console.WriteLine("Press a key to dispose the process...");
                        Console.ReadKey();
                    }
                }
            }
        }

    That’s how easily you can define a source and a sink, compose a query, and run it. Note that we did not replace the existing APIs; they co-exist with the new surface. Stay tuned: you will see a series of articles coming out over the next few weeks about the new features and how to use them. Come and grab it from our download center page and let us know what you think! You can find the updated MSDN documentation here, and we would appreciate it if you could provide feedback on the docs as well, best via email to [email protected]. Moreover, we updated our samples to demonstrate the new programming surface. Regards, The StreamInsight Team

  • C++ strongly typed typedef

    - by Kian
    I've been trying to think of a way of declaring strongly typed typedefs, to catch a certain class of bugs at the compilation stage. It's often the case that I'll typedef an int into several types of ids, or a vector to position or velocity:

        typedef int EntityID;
        typedef int ModelID;
        typedef Vector3 Position;
        typedef Vector3 Velocity;

    This can make the intent of code more clear, but after a long night of coding one might make silly mistakes like comparing different kinds of ids, or adding a position to a velocity perhaps:

        EntityID eID;
        ModelID mID;
        if ( eID == mID ) // <- Compiler sees nothing wrong
        { /*bug*/ }

        Position p;
        Velocity v;
        Position newP = p + v; // bug, meant p + v*s but compiler sees nothing wrong

    Unfortunately, the suggestions I've found for strongly typed typedefs include using Boost, which at least for me isn't a possibility (I do have C++11 at least). So after a bit of thinking, I came upon this idea, and wanted to run it by someone. First, you declare the base type as a template. The template parameter isn't used for anything in the definition, however:

        template < typename T >
        class IDType
        {
            unsigned int m_id;
        public:
            IDType( unsigned int const& i_id ): m_id {i_id} {};
            friend bool operator==<T>( IDType<T> const& i_lhs, IDType<T> const& i_rhs );
        };

    Friend functions actually need to be forward declared before the class definition, which requires a forward declaration of the template class. We then define all the members for the base type, just remembering that it's a template class. Finally, when we want to use it, we typedef it as:

        class EntityT;
        typedef IDType<EntityT> EntityID;

        class ModelT;
        typedef IDType<ModelT> ModelID;

    The types are now entirely separate. Functions that take an EntityID will throw a compiler error if you try to feed them a ModelID instead, for example. Aside from having to declare the base types as templates, with the issues that entails, it's also fairly compact. I was hoping someone had comments or critiques about this idea?

    One issue that came to mind while writing this, in the case of positions and velocities for example, is that I can't convert between types as freely as before. Where before multiplying a vector by a scalar would give another vector, I could do:

        typedef float Time;
        typedef Vector3 Position;
        typedef Vector3 Velocity;

        Time t = 1.0f;
        Position p = { 0.0f };
        Velocity v = { 1.0f, 0.0f, 0.0f };
        Position newP = p + v*t;

    With my strongly typed typedef I'd have to tell the compiler that multiplying a Velocity by a Time results in a Position:

        class TimeT;
        typedef Float<TimeT> Time;
        class PositionT;
        typedef Vector3<PositionT> Position;
        class VelocityT;
        typedef Vector3<VelocityT> Velocity;

        Time t = 1.0f;
        Position p = { 0.0f };
        Velocity v = { 1.0f, 0.0f, 0.0f };
        Position newP = p + v*t; // Compiler error

    To solve this, I think I'd have to specialize every conversion explicitly, which can be kind of a bother. On the other hand, this limitation can help prevent other kinds of errors (say, multiplying a Velocity by a Distance, perhaps, which wouldn't make sense in this domain). So I'm torn, and wondering if people have any opinions on my original issue, or my approach to solving it.

  • Why do we get a sudden spike in response times?

    - by Christian Hagelid
    We have an API that is implemented using ServiceStack and hosted in IIS. While performing load testing of the API we discovered that the response times are good, but that they deteriorate rapidly as soon as we hit about 3,500 concurrent users per server. We have two servers behind a load balancer, and when hitting them with 7,000 users (3,500 concurrent users per server) the average response times sit below 500 ms for all endpoints. However, as soon as we increase the number of total concurrent users we see a significant increase in response times: increasing the concurrent users to 5,000 per server gives us an average response time per endpoint of around 7 seconds.

    The memory and CPU on the servers are quite low, both while the response times are good and after they deteriorate. At peak, with 10,000 concurrent users, the CPU averages just below 50% and the RAM sits around 3-4 GB out of 16. This leaves us thinking that we are hitting some kind of limit somewhere. In perfmon during a load test with a total of 10,000 concurrent users, the requests/second graph becomes really erratic; this is the main indicator for slow response times. As soon as we see this pattern, we notice slow response times in the load test.

    How do we go about troubleshooting this performance issue? We are trying to identify whether this is a coding issue or a configuration issue. Are there any settings in web.config or IIS that could explain this behaviour? The application pool is running .NET v4.0 and the IIS version is 7.5. The only change we have made from the default settings is to update the application pool Queue Length value from 1,000 to 5,000. We have also added the following config settings to the Aspnet.config file:

        <system.web>
            <applicationPool maxConcurrentRequestsPerCPU="5000"
                             maxConcurrentThreadsPerCPU="0"
                             requestQueueLimit="5000" />
        </system.web>

    More details: The purpose of the API is to combine data from various external sources and return the result as JSON. It currently uses an in-memory cache implementation to cache individual external calls at the data layer. The first request to a resource will fetch all the data required, and any subsequent requests for the same resource will get results from the cache. We have a 'cache runner', implemented as a background process, that updates the information in the cache at certain set intervals. We have added locking around the code that fetches data from the external resources. We have also implemented the services that fetch the data from the external sources in an asynchronous fashion, so that an endpoint should only be as slow as the slowest external call (unless we have data in the cache, of course). This is done using the System.Threading.Tasks.Task class. Could we be hitting a limitation in terms of the number of threads available to the process?
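    Update: to test the thread-starvation theory, we are planning to log the ThreadPool numbers during a run. A rough diagnostic sketch (the SetMinThreads values in the comment are just a starting point for us to tune, not a recommendation):

        using System;
        using System.Threading;

        class ThreadPoolProbe
        {
            static void Main()
            {
                int minWorker, minIo, maxWorker, maxIo, freeWorker, freeIo;
                ThreadPool.GetMinThreads(out minWorker, out minIo);
                ThreadPool.GetMaxThreads(out maxWorker, out maxIo);
                ThreadPool.GetAvailableThreads(out freeWorker, out freeIo);

                // If (max - available) keeps climbing towards max while CPU
                // stays low, requests are queueing for threads, not for CPU.
                Console.WriteLine("Worker: min={0} max={1} available={2}",
                    minWorker, maxWorker, freeWorker);
                Console.WriteLine("IOCP:   min={0} max={1} available={2}",
                    minIo, maxIo, freeIo);

                // Possible experiment: raise the floor so bursts don't wait
                // for the pool's slow thread injection, e.g.:
                // ThreadPool.SetMinThreads(500, 500);
            }
        }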

  • Windows Server 2012 Migration (DNS/AD DS Standard Eval to Essentials OEM) P2V -> Do I need a Secondary Domain Controller during migration?

    - by Aubrey Robertson
    This is my first post on this exchange (although not my first on Stack Exchange), so please have patience. I am a 3rd year student intern, and I have been tasked with virtualizing the server systems at the company I work for. I have come a long way, and I am almost ready to install the VM server in migration mode. Here is some information:

    Source server: Windows Server 2012 Standard Evaluation; DNS server (local only); Active Directory Domain Services; file and storage roles; a few other server roles.

    Destination server: Windows Server 2012 Essentials OEM (Hyper-V client), running under a temporary Hyper-V host (I will migrate the Hyper-V host back to the old machine after the original server is virtualized as a client). It is currently sitting at the "Select Installation Mode" screen.

    I have been following the guides on Microsoft TechNet, and today I spent most of the day getting rid of issues in the Best Practices Analyzer on the source machine. I have 3 remaining issues (which are all related):

    ERROR: DNS: DNS servers on Ethernet (adapter name) should include the loopback address, but not as the first entry (the flavour text indicates that, during migration, the DNS server may not be found).
    WARNING: All domains should have at least two domain controllers for redundancy.
    WARNING: DNS: Ethernet should be configured to use both a preferred and an alternate DNS server.

    All of these issues can be resolved by deploying a secondary domain controller, but I have never done that before (see my concerns below). The main issue I am concerned with for installing in migration mode is the FIRST one (the error). If I try to set up the new server deployment and the adapter's domain controller is listed as localhost, then the installation may fail (at least, this is what the Microsoft documentation suggests). But I do not have another IP address to enter here, as I have no other local domain controllers. So I did the first obvious thing that came to my mind and tried to use Google DNS servers as my alternates. That did not work, because they couldn't recognize other computers in the "forest". Now I'm no expert when it comes to DNS, so please forgive my ignorance. This DNS server is concerned only with Active Directory matters for the local network.

    If I go ahead with migration and it fails, then I will just have to go ahead and install a secondary DNS server, I suppose. The problem I have here is that I am limited by the number of Windows Server keys I have available (I have 2); however, I do have access to a Linux box running Debian Wheezy that I set up two weeks ago as a Mantis server. I could install Windows Server 2012 as a secondary DNS (I think) in a VM and use that, but then it seems like I would be wasting time, and probably the Windows key too, and if there's another way to do it with Linux that would be much better. Even better still: do I even need a secondary DNS server for migration at all? The hints said that during migration the original machine "might" not be found. Thank you for your time and consideration.

  • Looking for advice on Hyper-v storage replication

    - by Notre1
    I am designing a 2-host Hyper-V R2 cluster with 6-10 guests stored on an SMB iSCSI SAN device (probably Promise VessRAID). I will be getting at least two of the SAN devices and need to eliminate the storage as a single point of failure. Ideally, that would involve real-time failover for the storage, like Windows failover clustering does for the hosts. This design will be used at around six of our sites, and I would like to allow for us to eventually set up a cluster at a colocation site and replicate each site's VMs there for DR. (Ideally a live multi-site cluster, but a manual import of the VMs would be fine for this sort of DR.)

    The tools that come with enterprise SANs, like EMC and NetApp, seem to be the most commonly used items for a Hyper-V cluster, but I can't afford their prices with my budget. Outside of them, the two tools that seem to be most common for Hyper-V storage replication are SteelEye (now SIOS) DataKeeper Cluster Edition and Double-Take Availability. Originally, I was planning on using Cluster Shared Volumes (CSV), but it seems like replication support for these is either not available or brand new in both of these products. It looks like CSVs are supported in Double-Take 5.22 (see this discussion), but I don't think I want to run something that new in production. Right now, it seems like the best option for me is not to implement CSVs, implement some sort of storage replication, and upgrade to CSVs at a later date once replicating them is more mature. I would love to have live migration, and CSVs are not required for live migration if you are using one LUN per VM, so I guess this is what I'll do.

    I would prefer to stick to using the Microsoft Windows Server and Hyper-V tools and features as much as possible. From that standpoint, SteelEye looks more appealing than Double-Take because they make the DataKeeper volume(s) available to the Failover Clustering Manager, and then failover clustering is all configured and managed through the native Microsoft tools. Double-Take says that "clustered Hyper-V hosts are not supported," and Double-Take Availability itself seems to be what is used for the actual clustering and failover.

    Does anyone know if any of these replication tools work with more than two hosts in the cluster? All the information I can find on the web only uses two hosts in its examples. Are there any better tools than SteelEye and Double-Take for doing what I am trying to do, which is to eliminate the storage as a single point of failure? Neverfail, AppAssure, and DataCore all seem to offer similar functionality, but they don't seem to be as popular as SteelEye and Double-Take. I have seen a number of people suggest using StarWind iSCSI SAN software for the shared storage, which includes replication (and CSV replication at that). There are a couple of reasons I have not seriously considered this route: 1) the company I work for is exclusively a Dell shop, and Dell does not have any servers that I can pack with more than six 3.5" SATA drives; 2) in the future, it could be advantageous for us not to be locked into a particular brand or type of storage, and third-party replication software all allows replication to heterogeneous storage devices.

    I am pretty new to iSCSI and clustering, so please let me know if it looks like I am planning something that goes against best practices or overlooking/missing something.

  • Using FFmpeg in c#?

    - by daniel
    So I know it's a fairly big challenge, but I want to write a basic movie player/converter in C# using the FFmpeg library. However, the first obstacle I need to overcome is wrapping the FFmpeg library in C#. I downloaded FFmpeg but couldn't compile it on Windows, so I downloaded a precompiled version instead. Ok, awesome. Then I started looking for C# wrappers.

    I have looked around and have found a few wrappers, such as SharpFFmpeg (http://sourceforge.net/projects/sharpffmpeg/) and ffmpeg-sharp (http://code.google.com/p/ffmpeg-sharp/). First of all I wanted to use ffmpeg-sharp, as it's LGPL and SharpFFmpeg is GPL. However, it had quite a few compile errors. It turns out it was written for the Mono compiler; I tried compiling it with Mono but couldn't figure out how. I then started to manually fix the compiler errors myself, but came across a few scary ones and thought I'd better leave those alone. So I gave up on ffmpeg-sharp.

    Then I looked at SharpFFmpeg, and it looks like what I want: all the functions P/Invoked for me. However it's GPL? Both the AVCodec.cs and AVFormat.cs files look like ports of avcodec.c and avformat.c, which I reckon I could port myself, and then not have to worry about licensing. But I want to get this right before I go ahead and start coding. Should I:

    1. Write my own C++ library for interacting with FFmpeg, then have my C# program talk to the C++ library in order to play/convert videos etc.; or
    2. Port avcodec.h and avformat.h (is that all I need?) to C# by using a whole lot of DllImports, and write it entirely in C#?

    First of all, consider that I'm not great at C++, as I rarely use it, but I know enough to get around. The reason I'm thinking #1 might be the better option is that most FFmpeg tutorials are in C++, and I'd also have more control over memory management than if I were to do it in C#. What do you think? Also, would you happen to have any useful links (perhaps a tutorial) for using FFmpeg? EDIT: spelling mistakes
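    Update: to get a feel for option #2, I sketched out what the first couple of DllImports might look like. This is untested, and the DLL names are just guesses based on the precompiled build I grabbed (the version suffix will differ per build):

        using System;
        using System.Runtime.InteropServices;

        class FFmpegProbe
        {
            // DLL names are assumptions -- match them to the binaries you
            // have, e.g. avformat-52.dll / avcodec-52.dll in builds of this era.
            [DllImport("avformat-52.dll", CallingConvention = CallingConvention.Cdecl)]
            static extern void av_register_all();

            [DllImport("avcodec-52.dll", CallingConvention = CallingConvention.Cdecl)]
            static extern uint avcodec_version();

            static void Main()
            {
                // FFmpeg requires registering all formats/codecs once up front.
                av_register_all();
                Console.WriteLine("libavcodec version: 0x{0:X}", avcodec_version());
            }
        }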

  • Silverlight 4 WCF RIA Services and MVVM is not as simple

    - by Thomas Jaskula
    [Disclaimer: I'm an ASP.NET MVC developer] Hi, I'm looking for some best practices for implementing the MVVM pattern with WCF RIA Services in Silverlight 4. I'm not looking to use MEF or IoC for locating my ViewModels. What I would like to know is how to apply the MVVM pattern with Silverlight 4 and WCF RIA. I don't want to use other stuff like Prism or the MVVM Light Toolkit.

    I found many examples on the Internet showing how wonderful it is to drag and drop a datasource onto the view and the job is done (it reminds me of my first VB6 developments). I tried to implement MVVM with WCF RIA and it's not straightforward at all. If I understand correctly, the ViewModel should contain all the logic, in order to unit test it in isolation, but when it comes to combining it with WCF RIA it's another story. I have the following questions:

    1. Can I use a generated entity with metadata as the model? It would be easier to use that than to write everything from scratch.
    2. As far as I can see, the only ways to get data are through the DomainContext or through direct binding in the view (local resource). I don't want the direct binding in the view: it's not testable at all. On the other hand, I can't use the DomainContext: it doesn't expose any single entity!!! All I have is the EntitySet, which I can bind to a datagrid. How do I bind a single entity to a DataForm from the ViewModel?
    3. How do I update the model back to the database?
    4. How do I navigate from one entity to a collection of its items? For example, if I have a Company entity, I would like to show a DataForm to update the company's information and a datagrid to show the company's addresses. When saving the form, I would like to save the information to Company and, for example, an indication of which address was selected as active.

    Please help me understand how to do this well. Or maybe I should drop WCF RIA and do it with plain WCF from scratch? What do you think?
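    Update: for question 2, this is the direction I am experimenting with, loading a single entity through the DomainContext from the ViewModel. CompanyDomainContext and GetCompanyByIdQuery are hypothetical generated members (yours will depend on your domain service), so treat this as a sketch only:

        // Inside the ViewModel; assumes INotifyPropertyChanged is implemented
        // and System.Linq is imported for FirstOrDefault().
        private readonly CompanyDomainContext _context = new CompanyDomainContext();
        private Company _selectedCompany;

        public Company SelectedCompany
        {
            get { return _selectedCompany; }
            set { _selectedCompany = value; OnPropertyChanged("SelectedCompany"); }
        }

        public void LoadCompany(int companyId)
        {
            _context.Load(_context.GetCompanyByIdQuery(companyId),
                op =>
                {
                    if (!op.HasError)
                        SelectedCompany = op.Entities.FirstOrDefault();
                },
                null);
        }

        public void Save()
        {
            // Pushes pending changes on the context back to the database.
            _context.SubmitChanges(op => { /* inspect op.HasError */ }, null);
        }

    The DataForm in the view would then bind its CurrentItem to SelectedCompany.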

  • VHDL - Problem with std_logic_vector

    - by wretrOvian
    Hi, I'm coding a 4-bit binary adder with accumulator:

        library ieee;
        use ieee.std_logic_1164.all;

        entity binadder is
            port(n,clk,sh: in bit;
                 x,y: inout std_logic_vector(3 downto 0);
                 co: inout bit;
                 done: out bit);
        end binadder;

        architecture binadder of binadder is
            signal state: integer range 0 to 3;
            signal sum,cin: bit;
        begin
            sum<= (x(0) xor y(0)) xor cin;
            co<= (x(0) and y(0)) or (y(0) and cin) or (x(0) and cin);

            process
            begin
                wait until clk='0';
                case state is
                    when 0=>
                        if(n='1') then
                            state<=1;
                        end if;
                    when 1|2|3=>
                        if(sh='1') then
                            x<= sum & x(3 downto 1);
                            y<= y(0) & y(3 downto 1);
                            cin<=co;
                        end if;
                        if(state=3) then
                            state<=0;
                        end if;
                end case;
            end process;

            done<='1' when state=3 else '0';
        end binadder;

    The output:

        -- Compiling architecture binadder of binadder
        ** Error: C:/Modeltech_pe_edu_6.5a/examples/binadder.vhdl(15): No feasible entries for infix operator "xor".
        ** Error: C:/Modeltech_pe_edu_6.5a/examples/binadder.vhdl(15): Type error resolving infix expression "xor" as type std.standard.bit.
        ** Error: C:/Modeltech_pe_edu_6.5a/examples/binadder.vhdl(16): No feasible entries for infix operator "and".
        ** Error: C:/Modeltech_pe_edu_6.5a/examples/binadder.vhdl(16): Bad expression in right operand of infix expression "or".
        ** Error: C:/Modeltech_pe_edu_6.5a/examples/binadder.vhdl(16): No feasible entries for infix operator "and".
        ** Error: C:/Modeltech_pe_edu_6.5a/examples/binadder.vhdl(16): Bad expression in left operand of infix expression "or".
        ** Error: C:/Modeltech_pe_edu_6.5a/examples/binadder.vhdl(16): Bad expression in right operand of infix expression "or".
        ** Error: C:/Modeltech_pe_edu_6.5a/examples/binadder.vhdl(16): Type error resolving infix expression "or" as type std.standard.bit.
        ** Error: C:/Modeltech_pe_edu_6.5a/examples/binadder.vhdl(28): No feasible entries for infix operator "&".
        ** Error: C:/Modeltech_pe_edu_6.5a/examples/binadder.vhdl(28): Type error resolving infix expression "&" as type ieee.std_logic_1164.std_logic_vector.
        ** Error: C:/Modeltech_pe_edu_6.5a/examples/binadder.vhdl(39): VHDL Compiler exiting

    I believe I'm not handling std_logic_vector's correctly. Please tell me how? :(

  • Doing TDD Silverlight 4 RC using Visual Studio 2010 RC

    - by user133992
    First, I am glad to see better TDD support in VS2010. Support for generating code stubs from my tests is OK; not as good as more mature TDD plug-ins, but a good start. I am looking for some Silverlight 4.0 TDD best practices.

    First question: anyone have links or recommendations? I know the new Silverlight unit test capabilities are much better (Jeff Wilcox's MIX presentation). What I am focusing on right now is using TDD to develop pure Silverlight 4.0 class library projects, i.e., projects without a Silverlight UI project. I've been able to get it to work, but not as cleanly as it should be. I can create an empty VS project, add a Silverlight 4 Class Library project, add a Test Project (not a Silverlight Unit Test Project but a plain Test Project), and add a simple test in the Test Project such as:

        namespace Calculator.Test
        {
            [TestClass]
            public class CalculatorTests
            {
                [TestMethod]
                public void CalulatorAddTest()
                {
                    Calc c = new Calc();
                    int expected = 10;
                    int actual = c.Add(6, 4);
                    Assert.AreEqual<int>(expected, actual);
                }
            }
        }

    Using the new Generate Type and Method from Test feature, it will generate the following code in the Silverlight project:

        namespace Calculator
        {
            public class Calc
            {
                public int Add(int p, int p_2)
                {
                    throw new NotImplementedException();
                }
            }
        }

    When I run the tests the first time, it says the target assembly is Silverlight and it is not able to run the test (not the exact text, but the same general idea). When I change the implementation to:

        namespace Calculator
        {
            public class Calc
            {
                public int Add(int p, int p_2)
                {
                    return p + p_2;
                }
            }
        }

    and re-run the test, it works fine and the test goes green. It also works for all other TDD code I generate afterwards. I also get a warning mark on the Test Project's reference to the Calculator Silverlight class library assembly.

    Second question: any comments or ideas on whether this is just a bug in VS2010 RC, or is Silverlight class library TDD not really supported? I have not created a Silverlight UI project or changed any build or debug settings, so I have no idea what is hosting the Silverlight DLL.

    Finally, some of the Silverlight class libraries I need to write will provide functionality that requires elevated out-of-browser (OOB) rights. Based on the above, it looks like I can use TDD Test Projects against regular Silverlight 4.0 class libraries, but I have no idea how I can TDD the elevated OOB functionality without also creating the UI component that gets installed. The UI piece is not really needed for the library development and gets in the way of what I actually want to TDD. I know I can (and will) mock some of that functionality, but at some point I will also need the real thing in my tests.

    Third question: any ideas how to TDD a Silverlight 4.0 class library project that requires elevated OOB rights? Thanks!

  • ActiveDirectoryMembershipProvider and ADAM (or AD LDS) and SetPassword

    - by Iulian
    By the subject line this seems to be a rather broad subject, and I need some help here. Basically what I want is to use ActiveDirectoryMembershipProvider with an ADAM instance to authenticate users in an ASP.NET web application. My development environment is a Windows 7 machine with an AD LDS instance on it, whilst the QA server is a Windows 2003 server with an ADAM instance on it. I have all the required users on both instances, plus one with the Administrator role (CN=Admin,CN=xxx,DC=xxx,C=xx) which I want to use as the connection user.

    Using connectionProtection="None" connectionUsername="CN=Admin,CN=xxx,DC=xxx,C=xx" connectionPassword="xxx", I am able to authenticate in both environments (dev & QA). If I change connectionProtection to "Secure", I am no longer able to authenticate; the error I get is "Parser Error Message: Unable to establish secure connection with the server". To me it sounds wrong to use connectionProtection="None", although I found a lot of samples on the net using this setting. Can I use connectionProtection="Secure" to connect to an ADAM instance using an account defined on that instance having the Administrator role? What other choices do I have (like using a domain account)? What if the machine where I am to deploy the application is not part of the domain; will this affect the behaviour in any way? I am a novice in this respect, so I would really appreciate some clear answers or some directions on where to look.

    Now, besides the "signing in" feature of the ActiveDirectoryMembershipProvider, I also want to add an extra one, which is setting the password without knowing the old one (something that will be used by a "reset password" feature). So I added a couple of extension methods to the provider, and used System.DirectoryServices classes like DirectoryEntry and the like. When creating a directory entry I use the same credentials provided in web.config for the provider, minus the AuthenticationType, as I don't know which combination of the flags corresponds to None/Secure. I am able to use Invoke "SetPassword" with the ADS_OPTION_PASSWORD_METHOD option set to ADS_PASSWORD_ENCODE_CLEAR on my dev machine (w/ AD LDS instance); nevertheless, on the QA environment (w/ ADAM instance) I am getting an error like "Exception Details: System.DirectoryServices.DirectoryServicesCOMException: An operations error occurred. (Exception from HRESULT: 0x80072020)". I am quite sure it is not about AD LDS vs ADAM, but probably another configuration/permission issue. So can anyone help me with some hints on how to use this SetPassword feature? And as a general question, what are the best practices when it comes to using ADAM regarding security, programming, etc.? Thanks in advance, Iulian
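    Update: for reference, this is roughly what my SetPassword extension does right now. Server, port, and DNs are placeholders, and the numeric ADSI constants are the ones I looked up (6 = ADS_OPTION_PASSWORD_PORTNUMBER, 7 = ADS_OPTION_PASSWORD_METHOD, 1 = ADS_PASSWORD_ENCODE_CLEAR), so treat this as a sketch:

        using System;
        using System.DirectoryServices;

        static class PasswordReset
        {
            static void SetPassword(string userDn, string newPassword)
            {
                using (DirectoryEntry user = new DirectoryEntry(
                    "LDAP://myserver:389/" + userDn,
                    "CN=Admin,CN=xxx,DC=xxx,C=xx", "secret",
                    AuthenticationTypes.None))
                {
                    // Tell ADSI which port to use and to send the password in
                    // the clear (only acceptable on a trusted network; with
                    // SSL you would use ADS_PASSWORD_ENCODE_REQUIRE_SSL = 0).
                    user.Invoke("SetOption", new object[] { 6, 389 });
                    user.Invoke("SetOption", new object[] { 7, 1 });
                    user.Invoke("SetPassword", new object[] { newPassword });
                    user.CommitChanges();
                }
            }
        }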

  • Android Packaging Problem: resources.ap_ does not exist

    - by Galip
    I have been trying to fix a problem in Eclipse for about 3 hours and I haven't made any progress. Tomorrow the customer is coming to look at my app, and I have no time left. This is really frustrating!

    This morning, when I was coding and wanted to run my app on my device, Eclipse crashed all of a sudden: 'aapt.exe has stopped working'. After this, Eclipse wasn't starting anymore; it froze at the splash image. I looked on the Internet and tried different solutions, like going back to Java SE 6 Update 20, changing the .ini file, etc. In the end, reinstalling Eclipse did the job. Shortly after that, the 'aapt.exe has stopped working' error returned. I found a workaround by changing my project's target: 1.5, 1.6, 2.2, it doesn't matter which, as long as it's different from the one before.

    Now I get the "Error generating final archive: java.io.FileNotFoundException: C:\xxx\bin\resources.ap_ does not exist" error. I tried Clean, but that doesn't work. Deleting and automatically regenerating R.java also didn't work. I ran the same code in NetBeans with the Android plugin, and there it gives me the 'aapt.exe has stopped working' again :( Please guys, how can I fix this?

    Edit: I think I may have found the reason. These are the error lines in the console:

        org.xmlpull.v1.XmlPullParserException: Binary XML file line #3: <bitmap> requires a valid src attribute
            at android.graphics.drawable.BitmapDrawable.inflate(BitmapDrawable.java:341)
            at android.graphics.drawable.Drawable.createFromXmlInner(Drawable.java:779)
            at android.graphics.drawable.Drawable.createFromXml(Drawable.java:720)
            at com.android.layoutlib.bridge.ResourceHelper.getDrawable(ResourceHelper.java:150)
            at com.android.layoutlib.bridge.BridgeTypedArray.getDrawable(BridgeTypedArray.java:668)
            at android.view.View.<init>(View.java:1846)
            at android.view.View.<init>(View.java:1795)
            at android.view.ViewGroup.<init>(ViewGroup.java:282)
            at java.lang.reflect.Method.invoke(Unknown Source)
            at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:619)
            at org.eclipse.equinox.launcher.Main.basicRun(Main.java:574)
            at org.eclipse.equinox.launcher.Main.run(Main.java:1407)
            at org.eclipse.equinox.launcher.Main.main(Main.java:1383)
        [2011-01-17 16:37:20 - gegevens.xml] Unable to resolve drawable "com.android.layoutlib.utils.ResourceValue@267e33de" in attribute "background"

    The file it's talking about is 'bg.png'. It's a small PNG file which I repeat in an XML file:

        <?xml version="1.0" encoding="utf-8"?>
        <bitmap xmlns:android="http://schemas.android.com/apk/res/android"
            android:src="@drawable/bg"
            android:tileMode="repeat" />

    This file has worked from the first time without any problems. I deleted it from the drawable folder, waited for an error message, and then added it back. The red X next to the folder name went away, but still nothing different...

  • PHP SimpleXML recursive function to list children and attributes

    - by Phill Pafford
    I need some help with the SimpleXML calls for a recursive function that lists element names and attributes. I am making an XML config file system, but each script will have its own config file as well as a new naming convention. So what I need is an easy way to map out all the elements that have attributes. For example, in Example 1 I need a simple way to call all the processes, but I don't know how to do this without hard-coding the element's name in the function call. Is there a way to recursively call a function to match a child element name? I did see the xpath functionality, but I don't see how to use it for attributes. Any ideas? Also, does the XML in the examples look correct? Can I structure my XML like this?

    Example 1:

        <application>
            <processes>
                <process id="123" name="run batch A" />
                <process id="122" name="run batch B" />
                <process id="129" name="run batch C" />
            </processes>
            <connections>
                <databases>
                    <database usr="test" pss="test" hst="test" dbn="test" />
                </databases>
                <shells>
                    <ssh usr="test" pss="test" hst="test-2" />
                    <ssh usr="test" pss="test" hst="test-1" />
                </shells>
            </connections>
        </application>

    Example 2:

        <config>
            <queues>
                <queue id="1" name="test" />
                <queue id="2" name="production" />
                <queue id="3" name="error" />
            </queues>
        </config>

    Pseudo code:

        // Would return matching process id
        getProcess($process_id) { return the process attributes as array that are in the XML }

        // Would return matching DBN (database name)
        getDatabase($database_name) { return the database attributes as array that are in the XML }

        // Would return matching SSH Host
        getSSHHost($ssh_host) { return the ssh attributes as array that are in the XML }

        // Would return matching SSH User
        getSSHUser($ssh_user) { return the ssh attributes as array that are in the XML }

        // Would return matching Queue
        getQueue($queue_id) { return the queue attributes as array that are in the XML }

    EDIT: Can I pass two params? On the first method you have suggested, @Gordon:

        public function findProcessById($id, $name)
        {
            $attr = false;
            // How do I also filter by the name?
            $el = $this->xml->xpath("//process[@id='$id']");
            if ($el && count($el) === 1) {
                $attr = (array) $el[0]->attributes();
                $attr = $attr['@attributes'];
            }
            return $attr;
        }

  • Access Violation Using memcpy or Assignment to an Array in a Struct

    - by Synetech inc.
    Hi, I wrote a program last night that worked just fine, but when I refactored it today to make it more extensible, I ended up with a problem. The original version had a hard-coded array of bytes. After some processing, some bytes were written into the array and then some more processing was done. To avoid hard-coding the pattern, I put the array in a structure so that I could add some related data and create an array of them. However, now I cannot write to the array in the structure. Here's a pseudo-code example:

        main()
        {
            char pattern[] = "\x32\x33\x12\x13\xba\xbb";
            PrintData(pattern);
            pattern[2] = '\x65';
            PrintData(pattern);
        }

    That one works, but this one does not:

        struct ENTRY
        {
            char* pattern;
            int somenum;
        };

        main()
        {
            ENTRY Entries[] = {
                {"\x32\x33\x12\x13\xba\xbb\x9a\xbc", 44},
                {"\x12\x34\x56\x78", 555}
            };
            PrintData(Entries[0].pattern);
            Entries[0].pattern[2] = '\x65'; // 0xC0000005 exception!!! :(
            PrintData(Entries[0].pattern);
        }

    The second version causes an access violation exception on the assignment. I'm sure it's because the second version allocates memory differently, but I'm starting to get a headache trying to figure out what's what or how to fix this. (I'm currently working around it by dynamically allocating a buffer of the same size as the pattern array, copying the pattern to the new buffer, making the changes to the buffer, using the buffer in place of the pattern array, and then trying to remember to free the (temporary) buffer.)

    (Specifically, the original version cast the pattern array (plus offset) to a DWORD* and assigned a DWORD constant to it to overwrite the four target bytes. The new version cannot do that, since the length of the source is unknown (it may not be four bytes), so it uses memcpy instead. I've checked and re-checked and have made sure that the pointers passed to memcpy are correct, but I still get an access violation. I use memcpy instead of str(n)cpy because I am using plain chars (as an array of bytes), not Unicode chars, and ignoring the null terminator. Using an assignment as above causes the same problem.)

    Any ideas? Thanks a lot.

  • WPF-Prism CanExecute method not being called

    - by nareshbhatia
    I am coding a simple login UserControl with two TextBoxes (Username and Password) and a Login button. I want the Login button to be enabled only when the username and password fields are filled in. I am using Prism and MVVM. The LoginViewModel contains a property called LoginCommand that is bound to the Login button. I have a CanLoginExecute() method in my ViewModel, but it fires only when the application comes up and then never again. So the Login button is never enabled. What am I missing?

    Here's my XAML:

        <TextBox x:Name="username"
                 Text="{Binding Path=Username, UpdateSourceTrigger=PropertyChanged, ValidatesOnDataErrors=True}" />
        <TextBox x:Name="password"
                 Text="{Binding Path=Password, UpdateSourceTrigger=PropertyChanged, ValidatesOnDataErrors=True}" />
        <Button Content="Login" cmnd:Click.Command="{Binding LoginCommand}" />

    Here's my ViewModel:

        class LoginViewModel : IDataErrorInfo, INotifyPropertyChanged
        {
            public LoginViewModel()
            {
                this.LoginCommand = new DelegateCommand<object>(
                    this.LoginExecute, this.CanLoginExecute);
            }

            private Boolean CanLoginExecute(object dummyObject)
            {
                return (string.IsNullOrEmpty(Username) || string.IsNullOrEmpty(Password)) ? false : true;
            }

            private void LoginExecute(object dummyObject)
            {
                if (CheckCredentials(Username, Password))
                {
                    ....
                }
            }

            #region IDataErrorInfo Members

            public string Error
            {
                get { throw new NotImplementedException(); }
            }

            public string this[string columnName]
            {
                get
                {
                    string result = null;
                    if (columnName == "Username")
                    {
                        if (string.IsNullOrEmpty(Username))
                            result = "Please enter a username";
                    }
                    else if (columnName == "Password")
                    {
                        if (string.IsNullOrEmpty(Password))
                            result = "Please enter a password";
                    }
                    return result;
                }
            }

            #endregion // IDataErrorInfo Members

            #region INotifyPropertyChanged Members

            public event PropertyChangedEventHandler PropertyChanged;

            void OnPropertyChanged(string propertyName)
            {
                if (PropertyChanged != null)
                    PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
            }

            #endregion // INotifyPropertyChanged Members

            #region Properties

            private String _username;
            public String Username
            {
                get { return _username; }
                set
                {
                    if (value == _username) return;
                    _username = value;
                    this.OnPropertyChanged("Username");
                }
            }

            private String _password;
            public String Password
            {
                get { return _password; }
                set
                {
                    if (value == _password) return;
                    _password = value;
                    this.OnPropertyChanged("Password");
                }
            }

            public ICommand LoginCommand { get; private set; }

            #endregion // Properties
        }
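    Update: the direction I am testing, for anyone landing here: Prism's DelegateCommand does not hook into CommandManager.RequerySuggested (at least in the Prism builds I have seen), so you have to ask it to re-query the CanExecute method yourself via RaiseCanExecuteChanged. A sketch of the Username setter (the same call would go in the Password setter):

        private String _username;
        public String Username
        {
            get { return _username; }
            set
            {
                if (value == _username) return;
                _username = value;
                this.OnPropertyChanged("Username");
                // Ask WPF to re-evaluate CanLoginExecute; without this the
                // command's enabled state is only computed once, at startup.
                ((DelegateCommand<object>)LoginCommand).RaiseCanExecuteChanged();
            }
        }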

  • Log4r : logger inheritance, yaml configuration, alternatives ?

    - by devlearn
    Hello, I'm pretty new to Ruby environments and I was looking for a nice logging framework to use in my Ruby and Rails applications. In my previous experience I have successfully used log4j and log4p (the Perl port) and was expecting the same level of usability (and maturity) with log4r. However, I must say that there are a number of things that are not clear at all in the log4r framework.

    1. Logger inheritance. Logger inheritance does not seem to be managed at all! If I declare a logger named 'myapp' and then try to get a logger named 'myapp::engine', the lookup ends with a NameError. I would expect the framework to fall back through the naming scheme and return the 'myapp' logger. Q1: Of course I can work around this and manage the names myself with a lookup method; however, is there a cleaner way to do this without any extra coding?

    2. YAML configuration. The second thing that confuses me is the YAML configuration. On the log4r site there is literally no information about this system, and the doc links lead to missing pages, so all the info I can find is contained in the examples directory of the gem. I was pretty confused by the fact that the YAML configuration must contain the pre_config section and that I need to define my own levels. If I remove the pre_config section, or replace all the "custom" levels with the standard ones (debug, info, warn, fatal), the example throws the following error:

        log4r/yamlconfigurator.rb:68:in `decode_yaml': Log level must be in 0..7 (ArgumentError)

    So there seems to be no way of using a simple file where we only declare the loggers and appenders for the framework. Q2: I really think that I must have missed something and that there must be a way of providing a simple YAML config file. Do you have any examples of such usage?

    3. Variable substitution in XML files. Q3: The YAML configuration system seems to provide such a feature; however, I was unable to find a similar feature for XML files. Any ideas?

    4. Alternatives? I must say that I'm very disappointed by the feature level and the maturity of log4r compared to log4j and the other log4j ports. I came to this framework with a solid background in logging APIs in other languages and find myself working around it in all kinds of ways just to get 'basic things' running in a "real world application". By that I mean a complex application composed of several gems, console/scripting apps, and a Rails web front end, where the configuration must be shared and where we make intensive use of namespaces and inheritance. I've run several searches in order to find something more suitable or mature, but did not find anything similar. Q4: Do you guys know any (serious) alternatives to the log4r framework that could be used in an enterprise-class app?

    Thanks for reading all of this! I'd really appreciate any pointers. Kind regards,

  • surfaceview + glsurfaceview + framelayout

    - by pohtzeyun
    Hi, I'm new at this (Java and OpenGL), so please bear with me if the answer to the question is simple. :) I'm trying to get a camera preview screen with the ability to display 3D objects simultaneously. Having gone through the samples in the API demos, I thought combining the code from those examples would suffice. But somehow it's not working: the app force closes on startup, and the error is reported as a null pointer exception. Could someone share with me where I went wrong and how to proceed from there? How I combined the code is shown below.

    myoverview.xml:

        <?xml version="1.0" encoding="utf-8"?>
        <FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
            android:orientation="horizontal"
            android:layout_width="fill_parent"
            android:layout_height="fill_parent">
            <android.opengl.GLSurfaceView
                android:id="@+id/cubes"
                android:orientation="horizontal"
                android:layout_width="fill_parent"
                android:layout_height="fill_parent"/>
            <SurfaceView
                android:id="@+id/camera"
                android:layout_width="fill_parent"
                android:layout_height="fill_parent"/>
        </FrameLayout>

    myoverview.java:

        import android.app.Activity;
        import android.os.Bundle;
        import android.view.SurfaceView;
        import android.view.Window;

        public class MyOverView extends Activity {
            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);

                // Hide the window title.
                requestWindowFeature(Window.FEATURE_NO_TITLE);

                // camera view as the background
                SurfaceView cameraView = (SurfaceView) findViewById(R.id.camera);
                cameraView = new CameraView(this);

                // visual of both cubes
                GLSurfaceView cubesView = (GLSurfaceView) findViewById(R.id.cubes);
                cubesView = new GLSurfaceView(this);
                cubesView.setRenderer(new CubeRenderer(false));

                // set view
                setContentView(R.layout.myoverview);
            }
        }

    GLSurfaceView.java:

        import android.content.Context;

        class GLSurfaceView extends android.opengl.GLSurfaceView {
            public GLSurfaceView(Context context) {
                super(context);
            }
        }

    NOTE: I didn't list the rest of the files, as they are just copies of the API demos. The CameraView refers to the CameraPreview.java example, and the CubeRenderer refers to the CubeRenderer.java and Cube.java examples. Any help would be appreciated, as I've been stuck at this for a couple of days :p Thanks.

    Sorry, didn't realise that the code was out of place due to formatting mistakes. :p

  • Why is code quality not popular?

    - by Peter Kofler
    I like my code to be in order, i.e. properly formatted, readable, designed, tested, checked for bugs, etc. In fact, I am fanatic about it. (Maybe even more than fanatic...) But in my experience, actions that help code quality are hardly ever implemented. (By code quality I mean the quality of the code you produce day to day. The whole topic of software quality, with development processes and such, is much broader and not the scope of this question.)

    Code quality does not seem popular. Some examples from my experience:

    Probably every Java developer knows JUnit, and almost all languages implement xUnit frameworks, but in all the companies I know, only very few proper unit tests existed (if at all). I know that it's not always possible to write unit tests due to technical limitations or pressing deadlines, but in the cases I saw, unit testing would have been an option. If a developer wanted to write some tests for his/her new code, he/she could have done so. My conclusion is that developers do not want to write tests.

    Static code analysis is often played around with in small projects, but not really used to enforce coding conventions or find possible errors in enterprise projects. Usually even compiler warnings like potential null pointer access are ignored.

    Conference speakers and magazines talk a lot about EJB 3.1, OSGi, cloud, and other new technologies, but hardly about new testing technologies or tools, new static code analysis approaches (e.g. SAT solving), development processes that help maintain higher quality, or how some nasty beast of legacy code was brought under test. (I did not attend many conferences, and it probably looks different for conferences on agile topics, as unit testing, CI, and such have a higher value there.)

    So why is code quality so unpopular/considered boring?

    EDIT: Thank you for your answers. Most of them concern unit testing (and that has been discussed in a related question). But there are lots of other things that can be used to keep code quality high (see related question). Even if you are not able to use unit tests, you could use a daily build, add some static code analysis to your IDE or development process, try pair programming, or enforce reviews of critical code.

  • What IDE to use for Python

    - by husayt
    As a Python newbie, it is interesting to know what IDEs ("GUIs/editors") others use for Python coding. If you can just give the name (e.g. TextPad, Eclipse...) that will be enough. If it is already mentioned, you can just vote for it. But if you can also give some more comparative information, that will be much appreciated. Thanks.

    Update: results so far:

        PyDev with Eclipse (CP, F, AC, PD, EM, SI, MLS, UML, SC, UT, LN, CF, BM)
        Komodo (CP, C/F, MLS, PD, AC, SC, SI, BM, LN, CF, CT)
        Emacs (CP, F, AC, MLS, PD, EM, SC, SI, BM, LN, CF, CT, UT, UML)
        Vim (CP, F, AC, MLS, SI, BM, LN, CF)
        TextMate (Mac, CT, CF, MLS, SI, BM, LN)
        Gedit (Linux, F, AC, MLS, BM, LN, CT [sort of])
        Idle (CP, F, AC)
        PIDA (Linux, CP, F, AC, MLS, SI, BM, LN, CF) (Vim-based)
        NotePad++ (Windows)
        BlueFish (Linux)
        JEdit (CP, F, BM, LN, CF, MLS)
        E-TextEditor (TextMate clone for Windows)
        WingIDE (CP, C, AC, MLS (support for C), PD, EM, SC, SI, BM, LN, CF, CT, UT)
        Eric IDE (CP, F, AC, PD, EM, SI, LN, CF, UT)
        PyScripter (Windows, F, AC, PD, EM, SI, LN, CT, UT)
        ConTEXT (Windows, C)
        SPE (F, AC, UML)
        SciTE (CP, F, MLS, EM, BM, LN, CF, CT, SH)
        Zeus (W, C, BM, LN, CF, SI, SC, CT)
        NetBeans (CP, F, PD, UML, AC, MLS, SC, SI, BM, LN, CF, CT, UT, RAD)
        DABO (CP)
        BlackAdder (C, CP, CF, SI)
        PythonWin (W, F, AC, PD, SI, BM, CF)
        Geany (CP, F, very limited AC, MLS, SI, BM, LN, CF)
        UliPad (CP, F, AC, PD, MLS, SI, LI, CT, UT, BM)
        Boa Constructor (CP, F, AC, PD, EM, SI, BM, LN, UML, CF, CT)
        ScriptDev (W, C, AC, MLS, PD, EM, SI, BM, LN, CF, CT)
        Spider (CP, F, AC)
        Editra (CP, F, AC, MLS, SC, SI, BM, LN, CF)
        Pfaide (Windows, C, AC, MLS, SI, BM, LN, CF, CT)
        KDevelop (CP, F, MLS, SC, SI, BM, LN, CF)

    Acronyms used:

        CP  - Cross Platform
        C   - Commercial
        F   - Free
        AC  - Automatic Code-completion
        MLS - Multi-Language Support
        PD  - Integrated Python Debugging
        EM  - Error Markup
        SC  - Source Control integration
        SI  - Smart Indent
        BM  - Bracket Matching
        LN  - Line Numbering
        UML - UML editing / viewing
        CF  - Code Folding
        CT  - Code Templates
        UT  - Unit Testing
        UID - GUI Designer (e.g. Qt, Eric, ...)
        DB  - integrated database support
        RAD - Rapid app development support

    I don't mention basics like syntax highlighting, as I expect these by default. This is just a dry list reflecting your feedback and comments; I am not advocating any of these tools. I will keep updating this list as you keep posting your answers.

    PS. Can you help me add the features of the above editors to the list (like autocomplete, debugging, etc.)?

    Read the article

  • ASP.NET web forms as ASP.NET MVC

    - by lopkiju
    I am sorry for the possibly misleading title, but I had no idea for a proper one. Feel free to edit. Anyway, I am using ASP.NET Web Forms, and maybe this isn't how Web Forms is intended to be used, but I like to construct and populate HTML elements manually. It gives me more control. I don't use DataBinding and that kind of stuff. I use SqlConnection, SqlCommand and SqlDataReader, set the SQL string etc. and read the data from the DataReader. Old school, if you like. :)

    I do create WebControls so that I don't have to copy-paste every time I need some control, but mostly I need WebControls to render as HTML so I can append that HTML into some other function that renders the final output with the control inside. I know I can render a control with control.RenderControl(writer), but this can only be done in (pre)Render or RenderContents overrides.

    For example, I have a dal.cs file where all the static functions and voids that communicate with the database are stored. The functions mostly return strings so they can be appended into some other function to render the final result. The reason I am doing it like this is that I want to separate the code from the HTML as much as I can, so that I don't do <% while (dataReader.Read()) %> in the HTML to display the data; I moved this into the code-behind. I also use these functions to render in the HttpHandler for the AJAX response. That works perfectly, but when I want to add a control (an ASP.NET server control, .cs extension, not .ascx) I don't know how to do that, so I find myself writing the same control as a function that returns a string, or another function inside that control that returns a string and does the job RenderContents would do, so that I can call that function whenever I need the control appended into another string. I know this may not be a very good practice.

    Watching all the tutorials/videos about ASP.NET MVC, I think it suits my needs, as with MVC you have to construct everything (or most of it) yourself, which I am already doing right now with Web Forms. After this long intro, I want to ask: how can I build my controls so that I can use them as I mentioned (returning a string), or do I have to forget about server controls and build the controls as functions and use them that way? Is that even possible with ASP.NET server controls (.cs extension), or am I right when I say that I am not using them correctly? To be clear, I am asking how to properly use Web Forms while avoiding data binders, because I want to construct everything myself (render the HTML in the code-behind). Someone might think that I am appending strings like "some " + "string", which I am not; I am using StringBuilder for that, so there's no slowness. Every opinion is welcome.

    Read the article

  • Entrepreneur Needs Programmers, Architects, or Engineers?

    - by brand-newbie
    Hi guys (ladies included). I posted on a related site, but THIS is the place to be. I want to build a specialized website. I am an entrepreneur and am now refining valuations for venture capitalists, i.e., determining how much cash I will need. I need help in understanding what human resources I need (i.e., software programmers, architects, engineers, etc.). Trust me, I have read most, if not all, of the threads here on the subject, and I can tell you I am no closer to the answer than ever.

    Here's my technology problem: the website will include two main components: a search engine (web crawler) and a very large database. The search engine will not be a competitor to Google, obviously; however, it "will" require bots to scour the web. The website will be, basically, a statistical database, where users should be able to pull up any statistic from "numerous" fields. Like any entrepreneur with a web-based vision, I'm "hoping" to get 100+ million registered users eventually. However, practically, we will start as small as feasible.

    As regards the technology (database architecture, servers, etc.), I do want quality, quality, quality. My priorities are speed and the capability to scale, so that if I "did" get globally large, we could do it without having to re-engineer anything. In other words, I want the back end and the "infrastructure" to be scalable and professional, with emphasis on quality.

    I am not an IT professional. Although I've built several Joomla-based websites, I'm just a rookie who has only used minor JavaScript coding to modify a few plug-ins and components. The business I'm trying to create requires specialization and experts. I want to define the problem and let a capable team create the final product, and I will stay totally hands-off.

    So who do you guys suggest I hire to run this thing? A software engineer? I was thinking I would need a "database engineer", a "systems security engineer", and maybe 2 or 3 "programmers" for the search engine. Also a web designer, and maybe a part-time graphic designer, everyone working under a single software engineer. What do you guys think? Who should I hire? I REALLY need help from some people in the industry (YOU guys) on this. Is this project doable in 6 months? If so, how many people will I need? Who exactly needs to head up this thing: a senior software engineer, an embedded engineer, a C/C++ engineer, a Java engineer, a database engineer? And do I build this thing in Ruby or Java?

    Read the article

  • Python: How to read huge text file into memory

    - by asmaier
    I'm using Python 2.6 on a Mac Mini with 1GB RAM. I want to read in a huge text file:

        $ ls -l links.csv; file links.csv; tail links.csv
        -rw-r--r--  1 user  user  469904280 30 Nov 22:42 links.csv
        links.csv: ASCII text, with CRLF line terminators
        4757187,59883
        4757187,99822
        4757187,66546
        4757187,638452
        4757187,4627959
        4757187,312826
        4757187,6143
        4757187,6141
        4757187,3081726
        4757187,58197

    So each line in the file consists of a tuple of two comma-separated integer values. I want to read in the whole file and sort it according to the second column. I know that I could do the sorting without reading the whole file into memory. But I thought that for a file of 500MB I should still be able to do it in memory, since I have 1GB available. However, when I try to read in the file, Python seems to allocate a lot more memory than is needed by the file on disk. So even with 1GB of RAM I'm not able to read the 500MB file into memory. My Python code for reading the file and printing some information about the memory consumption is:

        #!/usr/bin/python
        # -*- coding: utf-8 -*-
        import sys

        infile = open("links.csv", "r")
        edges = []

        # count the total number of lines in the file
        count = 0
        for line in infile:
            count = count + 1
        total = count
        print "Total number of lines: ", total

        infile.seek(0)
        count = 0
        for line in infile:
            edge = tuple(map(int, line.strip().split(",")))
            edges.append(edge)
            count = count + 1
            # for every million lines print memory consumption
            if count % 1000000 == 0:
                print "Position: ", edge
                print "Read ", float(count) / float(total) * 100, "%."
                mem = sys.getsizeof(edges)
                for edge in edges:
                    mem = mem + sys.getsizeof(edge)
                    for node in edge:
                        mem = mem + sys.getsizeof(node)
                print "Memory (Bytes): ", mem

    The output I got was:

        Total number of lines:  30609720
        Position:  (9745, 2994)
        Read  3.26693612356 %.
        Memory (Bytes):  64348736
        Position:  (38857, 103574)
        Read  6.53387224712 %.
        Memory (Bytes):  128816320
        Position:  (83609, 63498)
        Read  9.80080837067 %.
        Memory (Bytes):  192553000
        Position:  (139692, 1078610)
        Read  13.0677444942 %.
        Memory (Bytes):  257873392
        Position:  (205067, 153705)
        Read  16.3346806178 %.
        Memory (Bytes):  320107588
        Position:  (283371, 253064)
        Read  19.6016167413 %.
        Memory (Bytes):  385448716
        Position:  (354601, 377328)
        Read  22.8685528649 %.
        Memory (Bytes):  448629828
        Position:  (441109, 3024112)
        Read  26.1354889885 %.
        Memory (Bytes):  512208580

    Already after reading only 25% of the 500MB file, Python consumes 500MB. So it seems that storing the content of the file as a list of tuples of ints is not very memory efficient. Is there a better way to do it, so that I can read my 500MB file into my 1GB of memory?
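    For what it's worth, here is a minimal sketch (an illustration, not code from the original post) of a more memory-frugal layout: keeping the two columns in flat array.array buffers instead of a list of tuples, so each value is stored as a raw C int (the sample values fit comfortably in 32 bits, which is an assumption this sketch relies on) rather than as a full Python int object inside a tuple object:

        import array

        col1 = array.array('i')  # first column as raw C ints
        col2 = array.array('i')  # second column as raw C ints

        with open("links.csv", "r") as infile:
            for line in infile:
                a, b = line.strip().split(",")  # strip() also drops the CRLF
                col1.append(int(a))
                col2.append(int(b))

        # Sort by the second column without materializing 30M tuples:
        # build an index permutation ordered by the values in col2.
        order = sorted(xrange(len(col2)), key=col2.__getitem__)

        # The sorted pairs can then be read off lazily, e.g.:
        # for i in order:
        #     print col1[i], col2[i]

    One honest caveat: sorted() here still builds a plain list of roughly 30 million Python index ints, which carries its own per-object overhead on CPython 2.x, so under a hard 1GB ceiling the external-sort route the post already mentions (or a numpy array, if it is available) may still be the safer choice.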

    Read the article

  • Loading a Page into a jQuery Dialog

    - by Dave
    I tend to follow a fairly "modular" approach to building applications, and I recently started working with jQuery. The application I'm working on is going to be fairly large, so I'm trying to break pieces out into separate files/modules where possible. One example of this is a "User Settings" dialog. This dialog has a form, a few tabs, and quite a large number of input fields, so I want to develop it in a separate HTML file (PHP actually, but it can be considered HTML for the purposes of this example). It's an entire page in and of itself, with all the tags you would expect, such as:

        <html><head></head><body></body></html>

    So I can now develop what I want to be the dialog separately from the base application. This dialog has its own JavaScript (A LOT of JavaScript, in fact) in the head as well, with jQuery $(document).ready(){} capturing, etc. Everything works flawlessly in isolation. However, when I attempt to load the page into the jQuery modal dialog (inside of the main application page), as one might expect, trouble ensues. Here's a brief, very simple example of what it looks like:

        editUserDialog.load("editUser.php", {id : $('#userList').val(), popup : "true"}, function () {
            editUserDialog.dialog("option", "title", "Edit User");
            editUserDialog.dialog('open');
        });

    (I'm passing in a "popup" flag to the page so that the page can determine its context -- i.e. as a standalone page or inside of the jQuery dialog.)

    Question 1: When I moved the code from the "head" into the "body" (in the editUser.php page), it actually worked for the most part. It seemed that jQuery was calling the $(document).ready() function in the context of the body of the loaded file and not the head. Is this a bad idea?

    Question 2: Is my process for building this application just totally flawed to begin with? I've scoured the net attempting to find a "best practices" sort of document for building reasonably large applications using jQuery/PHP, without a lot of success, so maybe there's something out there that someone else is aware of that I've somehow missed.

    Thanks for bearing with me while I attempt to describe the issues I've encountered; I hope I've accurately described the problem.

    Read the article
