Search Results

Search found 9894 results on 396 pages for 'primary interop assembly'.


  • Obfuscation is not a panacea

    - by simonc
    So, you want to obfuscate your .NET application. My question to you is: why? What are your aims when you obfuscate your application? To protect your IP and algorithms? To prevent crackers from breaking your licensing? Because your boss says you need to? To give you a warm fuzzy feeling inside? Obfuscating code correctly can be tricky; it can break your app if applied incorrectly, and it can cause problems down the line. Let me be clear - there are some very good reasons why you would want to obfuscate your .NET application. However, you shouldn't be obfuscating for the sake of obfuscating.

    Security through Obfuscation?

    Once your application has been installed on a user's computer, you no longer control it. If they do not want to pay for your application, then nothing can stop them from cracking it, even if the time cost to them is much greater than the cost of actually paying for it. Some people will not pay for software, even if it takes them a month to crack a $30 app. And once it is cracked, there is nothing stopping them from putting the result up on the internet. There should be nothing surprising about this; there is no software protection available for general-purpose computers that cannot be cracked by a sufficiently determined attacker. Only by completely controlling the entire stack - software, hardware, and the internet connection - can you have even a chance of being uncrackable. And even then, someone somewhere will still have a go, and probably succeed. Even high-end cryptoprocessors have known vulnerabilities that can be exploited by someone with a scanning electron microscope and lots of free time.

    So why use obfuscation, then? Well, the primary reason is to protect your IP. What obfuscation is very good at is hiding the overall structure of your program, so that it's very hard to figure out what exactly the code is doing at any one time, what context it is running in, and how it fits in with the rest of the application - all of which you need to do to understand how the application operates. This is completely different to cracking an application, where you simply have to find a single toggle that determines whether the application is licensed or not, and flip it without the rest of the application noticing.

    However, again, there are limitations. An obfuscated application still has to run in the same way, and do the same thing, as the original unobfuscated application. This means that some of the protections applied to the obfuscated assembly have to be undone at runtime, or it would not run on the CLR and do the same thing. And, again, since we don't control the environment the application is run on, there is nothing stopping a user from undoing those protections manually and reversing some of the obfuscation. It's a perpetual arms race, and it always will be. We have plenty of ideas lined up for new protections, and the protections added in SA 6.6 (method parent obfuscation and a new control flow obfuscation level) are specifically designed to be harder to reverse and use to reconstruct the original structure.

    So then, by all means obfuscate your application if you want to protect your algorithms and what the application does. That's what SmartAssembly is designed to do. But make sure you are clear about what a .NET obfuscator can and cannot protect you against, and don't expect your obfuscated application to be uncrackable. Someone, somewhere, will crack your application if they want to and they don't have anything better to do with their time. The best we can do is dissuade the casual crackers and make it much more difficult for the serious ones.

    Cross posted from Simple Talk.

    Read the article

  • Identity in .NET 4.5&ndash;Part 3: (Breaking) changes

    - by Your DisplayName here!
    I recently started porting a private build of Thinktecture.IdentityModel to .NET 4.5 and noticed a number of changes. The good news is that I can delete large parts of my library, because many features are now in the box. Along the way I found some other nice additions:

    - ClaimsIdentity now has methods to query the claims collection, e.g. HasClaim(), FindFirst(), FindAll().
    - ClaimsPrincipal has those methods as well, but they work across all contained identities. Nice!
    - ClaimsPrincipal.Current retrieves the ClaimsPrincipal from Thread.CurrentPrincipal. Combined with the above changes, no casting is necessary anymore.
    - SecurityTokenHandler now has read and write methods that work directly with strings. This makes it much easier to deal with non-XML tokens like SWT or JWT.
    - A new session security token handler uses the ASP.NET machine key to protect the cookie. This makes it easier to get started in web farm scenarios; no need for a custom service host factory or the federation behavior anymore.
    - WCF can be switched into "WIF mode" with the useIdentityConfiguration switch (odd name, though).
    - Tooling has become better, and the new test STS makes it very easy to get started.

    On the other hand - and that was kind of expected - bringing claims into the core framework also means some breaking changes for WIF code. If you want to migrate (and I would recommend that), most changes to your code are mechanical. The following is a brain dump of the changes I encountered:

    - Assembly Microsoft.IdentityModel is gone. The new functionality is now in mscorlib, System.IdentityModel(.Services) and System.ServiceModel. All the namespaces have changed as well.
    - No IClaimsPrincipal and IClaimsIdentity anymore.
    - The configuration section has been split into <system.identityModel /> and <system.identityModel.services />. The WCF configuration story has changed as well.
    - Claim.ClaimType is now Claim.Type.
    - ClaimCollection is now IEnumerable<Claim>.
    - IsSessionMode is now IsReferenceMode.
    - Bootstrap token handling is different now.
    - ClaimsPrincipalHttpModule is gone. This is not really needed anymore, apart from maybe claims transformation (see here).
    - Various factory methods on ClaimsPrincipal are gone (e.g. ClaimsPrincipal.CreateFromIdentity()).
    - SecurityTokenHandler.ValidateToken now returns a ReadOnlyCollection<ClaimsIdentity>.
    - Some lower-level helper classes are gone or internal now (e.g. KeyGenerator).
    - The WCF WS-Trust bindings are gone. I think this is a pity; they were *really* useful when doing work with WSTrustChannelFactory.

    Since WIF is part of the Windows operating system and also supported in future versions of .NET, there is no urgent need to migrate to the 4.5 claims model. But obviously, going forward, at some point you will want to make the move.
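
    As an aside, here is a minimal sketch of the new query API described above (the identity and claim values are made up for illustration):

        using System;
        using System.Security.Claims;

        class ClaimsDemo
        {
            static void Main()
            {
                var identity = new ClaimsIdentity(new[]
                {
                    new Claim(ClaimTypes.Name, "alice"),
                    new Claim(ClaimTypes.Role, "admin")
                }, "custom");
                var principal = new ClaimsPrincipal(identity);

                // Queries work across all identities contained in the principal:
                bool isAdmin = principal.HasClaim(ClaimTypes.Role, "admin");
                Claim name = principal.FindFirst(ClaimTypes.Name);

                // No more casting from Thread.CurrentPrincipal:
                ClaimsPrincipal current = ClaimsPrincipal.Current;

                Console.WriteLine("{0} admin={1}", name.Value, isAdmin);
            }
        }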

    Read the article

  • Using WKA in Large Coherence Clusters (Disabling Multicast)

    - by jpurdy
    Disabling hardware multicast (by configuring well-known addresses, aka WKA) will place significant stress on the network. For messages that must be sent to multiple servers, rather than having a server send a single packet to the switch and having the switch broadcast that packet to the rest of the cluster, the server must send a packet to each of the other servers. While hardware varies significantly, consider that a server with a single gigabit connection can send at most ~70,000 packets per second. To continue with some concrete numbers, in a cluster with 500 members, that means each server can send at most 140 cluster-wide messages per second. And if there are 10 cluster members on each physical machine, that number shrinks to 14 cluster-wide messages per second (or, with only mild hyperbole, roughly zero). It is also important to keep in mind that network I/O is not only expensive in terms of the network itself, but also in the CPU required to send (or receive) a message (due to things like copying the packet bytes, processing an interrupt, etc).

    Fortunately, Coherence is designed to rely primarily on point-to-point messages, but there are some features that are inherently one-to-many:

    - Announcing the arrival or departure of a member
    - Updating partition assignment maps across the cluster
    - Creating or destroying a NamedCache
    - Invalidating a cache entry from a large number of client-side near caches
    - Distributing a filter-based request across the full set of cache servers (e.g. queries, aggregators and entry processors)
    - Invoking clear() on a NamedCache

    The first few of these are operations that are primarily routed through a single senior member, and they also occur infrequently, so they are usually not a primary consideration. There are cases, however, where the load from introducing new members can be substantial (to the point of destabilizing the cluster). Consider the case where the cluster in the first paragraph grows from 500 members to 1000 members (holding the number of physical machines constant). During this period, there will be 500 new member introductions, each of which may consist of several cluster-wide operations (for the cluster membership itself as well as the partitioned cache services, replicated cache services, invocation services, management services, etc). Note that all of these introductions will route through that one senior member, which is sharing its network bandwidth with several other members (which will be communicating to a lesser degree with other members throughout this process). While each service may have a distinct senior member, there's a good chance during initial startup that a single member will be the senior for all services (if those services start on the senior before the second member joins the cluster). It's obvious that this could cause CPU and/or network starvation.

    In the current release of Coherence (3.7.1.3 as of this writing), the pure unicast code path also has less sophisticated flow control for cluster-wide messages (compared to the multicast-enabled code path), which may also result in significant heap consumption on the senior member's JVM (from the message backlog). This is almost never a problem in practice, but with sufficient CPU or network starvation, it could become critical.

    For the non-operational concerns (near caches, queries, etc), the application itself will determine how much load is placed on the cluster. Applications intended for deployment in a pure unicast environment should be careful to avoid excessive dependence on these features. Even in an environment with multicast support, these operations may scale poorly, since even with a constant request rate, the underlying workload will increase at roughly the same rate as the underlying resources are added.

    Unless there is an infrastructural requirement to the contrary, multicast should be enabled. If it can't be enabled, care should be taken to ensure the added overhead doesn't lead to performance or stability issues. This is particularly crucial in large clusters.
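
    For concreteness, the arithmetic above as a small sketch (the ~70,000 packets/second figure is the rough single-NIC assumption from the first paragraph):

        class WkaMath
        {
            static void Main()
            {
                // With WKA (no multicast), a cluster-wide message costs one
                // unicast packet per peer instead of one packet to the switch.
                const int packetsPerSecond = 70000; // rough gigabit NIC ceiling
                const int clusterMembers = 500;

                // Each message must be sent individually to the other 499 members:
                int messagesPerSecond = packetsPerSecond / (clusterMembers - 1); // ~140

                // Ten members sharing one machine's NIC divide that further:
                int perMemberRate = messagesPerSecond / 10; // ~14

                System.Console.WriteLine("{0} cluster-wide msgs/sec, {1} per member",
                    messagesPerSecond, perMemberRate);
            }
        }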

    Read the article

  • What do you need to know to be a world-class master software developer? [closed]

    - by glitch
    I wanted to bring up this question to you folks and see what you think, and hopefully get your advice on the matter: let's say you had 30 years of learning and practicing software development in front of you. How would you dedicate your time so that you'd get the biggest bang for your buck? What would you both learn and work on to be a world-class software developer who would make a large impact on the industry and leave behind a legacy? I think that most great developers end up being both broad generalists and specialists in one or two areas of interest. I'm thinking Bill Joy, John Carmack, Linus Torvalds, K&R and so on. I'm thinking that perhaps one approach would be to break things down by categories and establish a base minimum of "software development" greatness. I'm thinking:

    - Operating Systems: completely internalize the core concepts of OSes, and perhaps gain a lot of familiarity with an OSS one such as Linux. Anything from memory management to device drivers has to be completely second nature.
    - Programming Languages: this is one of those topics that imho has to be fully grokked, even if it might take many years. I don't think there's quite anything like going through the process of developing your own compiler, understanding language design trade-offs and so on. Programming Language Pragmatics is one of my favorite books, actually; I think you want to have that internalized back to back, and that's just the start. You could go significantly deeper, but I think it's time well spent, because it's such a crucial building block. As a subset of that, you want to really understand the different programming paradigms out there: imperative, declarative, logic, functional and so on. Anything from assembly to LISP should be at the very least comfortable to write in.
    - Contexts: I believe one should have experience working in different contexts to truly be able to appreciate the trade-offs that are being made every day: embedded, web development, mobile development, UX development, distributed, cloud computing and so on.
    - Hardware: I'm somewhat conflicted about this one. I think you want some understanding of computer architecture at a low level, but I feel like the concepts that will truly matter will be slightly higher level, such as CPU caching / memory hierarchy, ILP, and so on.
    - Networking: we live in a completely network-dependent era. Having a good understanding of the OSI model, knowing how the Web works, how HTTP works and so on is pretty much a prerequisite these days.
    - Distributed systems: once again, everything's distributed these days, and it's getting progressively harder to ignore this reality. Slightly related: perhaps add a solid understanding of how browsers work to that, since the world seems to be moving so much to interfacing with everything through a browser.
    - Tools: have a really broad toolset that you're familiar with, one that continuously expands throughout the years.
    - Communication: I think being a great writer, an effective communicator and a phenomenal team player is pretty much a prerequisite for a lot of a software developer's greatness. It can't be overstated.
    - Software engineering: understanding the process of building software, team dynamics, the requirements of the business side, all the pitfalls. You want to deeply understand where what you're writing fits from the market perspective. The better you understand all of this, the more of your work will actually see the light of day.

    This is really just a starting list; I'm confident that there's a ton of other material that you need to master. As I mentioned, you most likely end up specializing in a bunch of these areas as you go along, but I was trying to come up with a baseline. Any thoughts, suggestions and words of wisdom from the grizzled veterans out there who would like to share their thoughts and experiences with this? I'd really love to know what you think!

    Read the article

  • Entity System with C++ templates

    - by tommaisey
    I've been getting interested in the Entity/Component style of game programming, and I've come up with a design in C++ which I'd like a critique of. I decided to go with a fairly pure entity system, where entities are simply an ID number. Components are stored in a series of vectors - one for each component type. However, I didn't want to have to add boilerplate code for every new component type I added to the game. Nor did I want to use macros to do this, which frankly scare me. So I've come up with a system based on templates and type hinting. But there are some potential issues I'd like to check before I spend ages writing this (I'm a slow coder!).

    All components derive from a Component base class. This base class has a protected constructor that takes a string parameter. When you write a new derived component class, you must initialise the base with the name of your new class in a string. When you first instantiate a new DerivedComponent, it adds the string to a static hashmap inside Component, mapped to a unique integer id. When you subsequently instantiate more components of the same type, no action is taken. The result (I think) should be a static hashmap with the name of each class derived from Component that you instantiate at least once, mapped to a unique id, which can be obtained with the static method Component::getTypeId("DerivedComponent"). Phew.

    The next important part is TypedComponentList<typename PropertyType>. This is basically just a wrapper around an std::vector<typename PropertyType> with some useful methods. It also contains a hashmap of entity ID numbers to slots in the array, so we can find components by their entity owner. Crucially, TypedComponentList<> is derived from the non-template class ComponentList. This allows me to maintain a list of pointers to ComponentList in my main ComponentManager, which actually point to TypedComponentLists with different template parameters (sneaky). The ComponentManager has template functions such as:

        template <typename ComponentType>
        void addProperty (ComponentType& component, int componentTypeId, int entityId)

    and:

        template <typename ComponentType>
        TypedComponentList<ComponentType>* getComponentList (int componentTypeId)

    which deal with casting from ComponentList to the correct TypedComponentList for you. So to get a list of a particular type of component you call:

        TypedComponentList<MyComponent>* list =
            componentManager.getComponentList<MyComponent> (Component::getTypeId("MyComponent"));

    Which, I'll admit, looks pretty ugly.

    Bad points of the design:

    - If a user of the code writes a new component class but supplies the wrong string to the base constructor, the whole system will fail.
    - Each time a new component is instantiated, we must check a hashed string to see if that component type has been instantiated before.
    - It will probably generate a lot of assembly because of the extensive use of templates; I don't know how well the compiler will be able to minimise this.
    - You could consider the whole system a bit complex - perhaps premature optimisation? But I want to use this code again and again, so I want it to be performant.

    Good points of the design:

    - Components are stored in typed vectors, but they can also be found by using their entity owner id as a hash. This means we can iterate them fast and minimise cache misses, but also skip straight to the component we need if necessary.
    - We can freely add components of different types to the system without having to add and manage new component vectors by hand.

    What do you think? Do the good points outweigh the bad?

    Read the article

  • Configuring Delayed Redo Apply (DELAY) in DataGuard

    - by JaneZhang
    To delay redo apply on a standby, specify the DELAY attribute (in minutes) on the corresponding log_archive_dest_n parameter. For example, DELAY=360 delays apply by 360 minutes (6 hours):

        SQL> alter system set log_archive_dest_2='SERVICE=standby LGWR SYNC AFFIRM DELAY=360 VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) COMPRESSION=ENABLE DB_UNIQUE_NAME=standby';

    If DELAY is specified without a value, the default delay is 30 minutes.

    Note that if the standby uses real-time apply, the DELAY setting is ignored and redo is applied as soon as it is received. In that case, the standby alert log reports:

        WARNING: Managed Standby Recovery started with USING CURRENT LOGFILE
        DELAY 360 minutes specified at primary ignored  <<<<<<<<<

    For the delay to take effect, restart managed recovery without real-time apply:

        SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
        SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

    A quick test of delayed apply:

    1. Check the current setting:

        SQL> show parameter log_archive_dest_2

        NAME                                 TYPE        VALUE
        ------------------------------------ ----------- ------------------------------
        log_archive_dest_2                   string      SERVICE=STANDBY LGWR SYNC AFFI
                                                         RM VALID_FOR=(ONLINE_LOGFILES,
                                                         PRIMARY_ROLE) DB_UNIQUE_NAME=S
                                                         TANDBY

    2. Set a 5-minute delay:

        SQL> alter system set log_archive_dest_2='SERVICE=STANDBY LGWR SYNC AFFIRM delay=5 VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY';

    3. Verify by switching a log on the primary:

        SQL> alter system switch logfile;
        System altered.

        SQL> select max(sequence#) from v$archived_log;

        MAX(SEQUENCE#)
        --------------
                    28

    The standby alert log shows the delay taking effect:

        Wed Jun 13 19:48:53 2012
        Archived Log entry 14 added for thread 1 sequence 28 ID 0x4c9d8928 dest 1:
        ARCb: Archive log thread 1 sequence 28 available in 5 minute(s)
        Wed Jun 13 19:48:54 2012
        Media Recovery Delayed for 5 minute(s) (thread 1 sequence 28)  <<<< apply delayed
        Wed Jun 13 19:53:54 2012
        Media Recovery Log /home/oracle/arch1/standby/1_28_757620395.arc  <<<< applied 5 minutes later
        Media Recovery Waiting for thread 1 sequence 29 (in transit)

    For more details, see:
    http://docs.oracle.com/cd/E11882_01/server.112/e25608/log_apply.htm
    Oracle® Data Guard Concepts and Administration, 11g Release 2 (11.2), Part Number E25608-03

    Read the article

  • Lots of first chance Microsoft.CSharp.RuntimeBinderExceptions thrown when dealing with dynamics

    - by Orion Edwards
    I've got a standard 'dynamic dictionary' type class in C#:

        class Bucket : DynamicObject
        {
            readonly Dictionary<string, object> m_dict = new Dictionary<string, object>();

            public override bool TrySetMember(SetMemberBinder binder, object value)
            {
                m_dict[binder.Name] = value;
                return true;
            }

            public override bool TryGetMember(GetMemberBinder binder, out object result)
            {
                return m_dict.TryGetValue(binder.Name, out result);
            }
        }

    Now I call it, as follows:

        static void Main(string[] args)
        {
            dynamic d = new Bucket();
            d.Name = "Orion";          // 2 RuntimeBinderExceptions
            Console.WriteLine(d.Name); // 2 RuntimeBinderExceptions
        }

    The app does what you'd expect it to, but the debug output looks like this:

        A first chance exception of type 'Microsoft.CSharp.RuntimeBinder.RuntimeBinderException' occurred in Microsoft.CSharp.dll
        A first chance exception of type 'Microsoft.CSharp.RuntimeBinder.RuntimeBinderException' occurred in Microsoft.CSharp.dll
        'ScratchConsoleApplication.vshost.exe' (Managed (v4.0.30319)): Loaded 'Anonymously Hosted DynamicMethods Assembly'
        A first chance exception of type 'Microsoft.CSharp.RuntimeBinder.RuntimeBinderException' occurred in Microsoft.CSharp.dll
        A first chance exception of type 'Microsoft.CSharp.RuntimeBinder.RuntimeBinderException' occurred in Microsoft.CSharp.dll

    Any attempt to access a dynamic member seems to output a RuntimeBinderException to the debug logs. While I'm aware that first-chance exceptions are not a problem in and of themselves, this does cause some problems for me:

    - I often have the debugger set to "break on exceptions", as I'm writing WPF apps, and otherwise all exceptions end up getting converted to a DispatcherUnhandledException, and all the actual information you want is lost. WPF sucks like that.
    - As soon as I hit any code that's using dynamic, the debug output log becomes fairly useless. All the useful trace lines that I care about get hidden amongst all the useless RuntimeBinderExceptions.

    Is there any way I can turn this off, or is the RuntimeBinder unfortunately just built like that?

    Thanks, Orion
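
    One way to see for yourself that these are thrown-and-swallowed internal exceptions (rather than anything escaping your code) is the .NET 4 first-chance exception hook; a small diagnostic sketch, reusing the Bucket class above:

        using System;
        using System.Runtime.ExceptionServices;

        static class FirstChanceDemo
        {
            static void Main()
            {
                // Fires for every exception at the moment it is thrown, even ones
                // Microsoft.CSharp's runtime binder catches internally.
                AppDomain.CurrentDomain.FirstChanceException +=
                    (sender, e) => Console.WriteLine("First chance: " + e.Exception.GetType().Name);

                dynamic d = new Bucket();
                d.Name = "Orion";           // logs RuntimeBinderException(s)...
                Console.WriteLine(d.Name);  // ...but the program runs normally
            }
        }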

    Read the article

  • Missing Edit Option on Silverlight 4 DataForm

    - by rip
    I’m trying out the Silverlight 4 beta DataForm control. I don’t seem to be able to get the edit and paging options at the top of the control, like I’ve seen in Silverlight 3 examples. Has something changed, or am I doing something wrong? Here’s my code:

        <UserControl x:Class="SilverlightApplication7.MainPage"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
            xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
            mc:Ignorable="d" d:DesignHeight="300" d:DesignWidth="400"
            xmlns:dataFormToolkit="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls.Data.DataForm.Toolkit">
            <Grid x:Name="LayoutRoot" Background="White">
                <dataFormToolkit:DataForm HorizontalAlignment="Left" Margin="10" Name="myDataForm" VerticalAlignment="Top" />
            </Grid>
        </UserControl>

        public partial class MainPage : UserControl
        {
            public MainPage()
            {
                InitializeComponent();
                this.Loaded += new RoutedEventHandler(MainPage_Loaded);
            }

            void MainPage_Loaded(object sender, RoutedEventArgs e)
            {
                Movie movie = new Movie();
                myDataForm.CurrentItem = movie;
            }

            public enum Genres { Comedy, Fantasy, Drama, Thriller }

            public class Movie
            {
                public int MovieID { get; set; }
                public string Name { get; set; }
                public int Year { get; set; }
                public DateTime AddedOn { get; set; }
                public string Producer { get; set; }
                public Genres Genre { get; set; }
            }
        }

    Read the article

  • Serializing object with no namespaces using DataContractSerializer

    - by Yurik
    How do I remove XML namespaces from an object's XML representation serialized using DataContractSerializer? That object needs to be serialized to a very simple output XML:

    - Latest & greatest - using .NET 4 beta 2.
    - The object will never need to be deserialized.
    - The XML should not have any xmlns:... namespace refs.
    - Any subtypes of Exception and ISubObject need to be supported.
    - It will be very difficult to change the original object.

    Object:

        [Serializable]
        class MyObj
        {
            string str;
            Exception ex;
            ISubObject subobj;
        }

    Need to serialize into:

        <xml>
          <str>...</str>
          <ex i:nil="true" />
          <subobj i:type="Abc">
            <AbcProp1>...</AbcProp1>
            <AbcProp2>...</AbcProp2>
          </subobj>
        </xml>

    I used this code:

        private static string ObjectToXmlString(object obj)
        {
            if (obj == null) throw new ArgumentNullException("obj");
            var serializer = new DataContractSerializer(
                obj.GetType(), null, Int32.MaxValue, false, false, null,
                new AllowAllContractResolver());
            var sb = new StringBuilder();
            using (var xw = XmlWriter.Create(sb, new XmlWriterSettings
            {
                OmitXmlDeclaration = true,
                NamespaceHandling = NamespaceHandling.OmitDuplicates,
                Indent = true
            }))
            {
                serializer.WriteObject(xw, obj);
                xw.Flush();
                return sb.ToString();
            }
        }

    From this article I adapted a DataContractResolver so that no subtypes have to be declared:

        public class AllowAllContractResolver : DataContractResolver
        {
            public override bool TryResolveType(Type dataContractType, Type declaredType,
                DataContractResolver knownTypeResolver,
                out XmlDictionaryString typeName, out XmlDictionaryString typeNamespace)
            {
                if (!knownTypeResolver.TryResolveType(dataContractType, declaredType, null, out typeName, out typeNamespace))
                {
                    var dictionary = new XmlDictionary();
                    typeName = dictionary.Add(dataContractType.FullName);
                    typeNamespace = dictionary.Add(dataContractType.Assembly.FullName);
                }
                return true;
            }

            public override Type ResolveName(string typeName, string typeNamespace, Type declaredType,
                DataContractResolver knownTypeResolver)
            {
                return knownTypeResolver.ResolveName(typeName, typeNamespace, declaredType, null)
                    ?? Type.GetType(typeName + ", " + typeNamespace);
            }
        }

    Read the article

  • Exchange Web Service (EWS) call fails under ASP.NET but not a console application

    - by Vince Panuccio
    I'm getting an error when I attempt to connect to Exchange Web Services via ASP.NET. The following code works if I call it via a console application, but the very same code fails when executed on an ASP.NET web forms page. Just as a side note, I am using my own credentials throughout this entire code sample.

        "When making a request as an account that does not have a mailbox, you must specify the mailbox primary SMTP address for any distinguished folder Ids."

    I thought I might be able to fix the issue by specifying an impersonated user:

        exchangeservice.ImpersonatedUserId = new ImpersonatedUserId(ConnectingIdType.SmtpAddress, "[email protected]");

    But then I get a different error:

        "The account does not have permission to impersonate the requested user."

    The App Pool that the web application is running under is also my own account (same as the console application), so I have no idea what might be causing this issue. I am using .NET framework 3.5. Here is the code in full:

        var exchangeservice = new ExchangeService(ExchangeVersion.Exchange2010_SP1) { Timeout = 10000 };
        var credentials = new System.Net.NetworkCredential("username", "pass", "domain");
        exchangeservice.Credentials = credentials;
        exchangeservice.AutodiscoverUrl("[email protected]");

        FolderId rootFolderId = new FolderId(WellKnownFolderName.Inbox);
        var folderView = new FolderView(100) { Traversal = FolderTraversal.Shallow };
        FindFoldersResults findFoldersResults = exchangeservice.FindFolders(rootFolderId, folderView);

    Read the article

  • MSTest VS2010 - DeploymentItem copying files to different locations on different machines

    - by Jack
    I have found that DeploymentItem

        [TestClass(), DeploymentItem(@"TestData\")]

    is not copying my test data files to the same location when tests are built and run on different machines. The test data files are copied to the "bin\debug" directory in the test project on my machine, but on my friend's machine they are copied to "TestResults\*name_machine YY-MM-DD HH_MM_SS*\Out". The bin\debug directory on my machine can be obtained with the code:

        string appDirectory = Path.GetDirectoryName(System.Reflection.Assembly.GetExecutingAssembly().Location);

    and the same code will return "TestResults\*name_machine YY-MM-DD HH_MM_SS*\Out" on my friend's PC. This, however, isn't really the problem. The problem is that the test data files I have made have a folder structure, and this folder structure is only maintained on my machine when copied to bin\debug, whereas on my friend's machine only the files are added to the "TestResults\*name_machine YY-MM-DD HH_MM_SS*\Out" directory. This means that tests will pass on my machine and fail on his! Is there a way to ensure that DeploymentItem always copies to the bin\debug folder? Or a way to ensure that the folder structure will be retained when DeploymentItem copies the files to the "TestResults\*name_machine YY-MM-DD HH_MM_SS*\Out" folder?
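
    One approach that sidesteps the difference between the two machines is to resolve paths against MSTest's own deployment directory rather than the assembly location, and to give DeploymentItem an output folder so structure is preserved. A sketch (the file and folder names are placeholders):

        using System.IO;
        using Microsoft.VisualStudio.TestTools.UnitTesting;

        [TestClass]
        [DeploymentItem(@"TestData\", "TestData")] // second argument deploys into a TestData subfolder
        public class DeployedDataTests
        {
            // Set automatically by the MSTest framework.
            public TestContext TestContext { get; set; }

            [TestMethod]
            public void Reads_deployed_file()
            {
                // DeploymentDirectory points at wherever MSTest actually deployed
                // to (bin\Debug or TestResults\...\Out), on any machine.
                string path = Path.Combine(TestContext.DeploymentDirectory, "TestData", "input.xml");
                Assert.IsTrue(File.Exists(path));
            }
        }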

    Read the article

  • jqGrid (Delete row) - How to send additional POST data???

    - by ronanray
    Hi experts, I'm having a problem with the jqGrid delete mechanism, as it only sends "oper" and "id" parameters in the form of POST data (id is the primary key of the table). The problem is, I need to delete a row based on the id and another column value, let's say user_id. How do I add this user_id to the POST data? I can summarize the issue as the following:

    - How to get the cell value (user_id) of the selected row?
    - How to add that user_id to the POST data, so it can be retrieved from the code-behind where the actual delete process takes place?

    Sample code:

        jQuery("#tags").jqGrid({
            url: "subgrid.process.php",
            editurl: "subgrid.process.php",
            datatype: "json",
            mtype: "POST",
            colNames: ['id', 'user_id', 'status_type_id'],
            colModel: [
                { name: 'id', index: 'id', width: 100, editable: true },
                { name: 'user_id', index: 'user_id', width: 200, editable: true },
                { name: 'status_type_id', index: 'status_type_id', width: 200 }
            ],
            pager: '#pagernav2',
            rowNum: 10,
            rowList: [10, 20, 30, 40, 50, 100],
            sortname: 'id',
            sortorder: "asc",
            caption: "Test",
            height: 200
        });

        jQuery("#tags").jqGrid('navGrid', '#pagernav2',
            { add: true, edit: false, del: true, search: false },
            {}, // edit options
            { mtype: "POST", closeAfterAdd: true, reloadAfterSubmit: true }, // add options
            { mtype: "POST", reloadAfterSubmit: true }, // del options
            {} // search options
        );

    Help....

    Read the article

  • faster implementation of sum ( for Codility test )

    - by Oscar Reyes
    How can the following simple implementation of sum be faster?

        private long sum(int[] a, int begin, int end)
        {
            if (a == null)
            {
                return 0;
            }
            long r = 0;
            for (int i = begin; i < end; i++)
            {
                r += a[i];
            }
            return r;
        }

    EDIT: Some background is in order. Reading the latest entry on Coding Horror, I came to this site: http://codility.com, which has this interesting programming test. Anyway, I got 60 out of 100 on my submission, and basically (I think) it is because of this implementation of sum, because those parts where I failed are the performance parts. I'm getting TIME_OUT_ERRORs. So, I was wondering if an optimization in the algorithm is possible. No built-in functions or assembly would be allowed. This may be done in C, C++, C#, Java or pretty much any other language.

    EDIT: As usual, mmyers was right. I did profile the code and I saw most of the time was spent on that function, but I didn't understand why. So what I did was throw away my implementation and start with a new one. This time I've got an optimal solution [according to San Jacinto, O(n) - see comments to MSN below]. This time I've got 81% on Codility, which I think is good enough. The problem is that I didn't take the 30 mins. but around 2 hrs., but I guess that still leaves me as a good programmer, for I could work on the problem until I found an optimal solution. Here's my result. I never understood what those "combinations of..." were, nor how to test "extreme_first".
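
    If the timeouts come from summing many overlapping ranges, one standard fix is to precompute prefix sums once in O(n), so that each range sum becomes a single subtraction. A sketch in C# (whether this maps exactly onto the unnamed Codility task is an assumption):

        static class RangeSums
        {
            // prefix[i] holds the sum of a[0..i-1], so prefix has n+1 entries.
            public static long[] BuildPrefixSums(int[] a)
            {
                var prefix = new long[a.Length + 1];
                for (int i = 0; i < a.Length; i++)
                    prefix[i + 1] = prefix[i] + a[i];
                return prefix;
            }

            // Sum of a[begin..end) in O(1) after the O(n) build above.
            public static long RangeSum(long[] prefix, int begin, int end)
            {
                return prefix[end] - prefix[begin];
            }
        }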

    Read the article

  • bind a WPF datagrid to a datatable

    - by Jim Thomas
    I have used the marvelous example posted at http://www.codeproject.com/KB/WPF/WPFDataGridExamples.aspx to bind a WPF datagrid to a datatable. The source code below compiles fine; it even runs and displays the contents of the InfoWork datatable in the WPF datagrid. Hooray! But the WPF page with the datagrid will not display in the designer. I get an incomprehensible error instead on my design page, which is shown at the end of this posting. I assume the designer is having some difficulty instantiating the dataview for display in the grid. How can I fix that?

    XAML code:

        xmlns:local="clr-namespace:InfoSeeker"

        <Window.Resources>
            <ObjectDataProvider x:Key="InfoWorkData" ObjectType="{x:Type local:InfoWorkData}" />
            <ObjectDataProvider x:Key="InfoWork" ObjectInstance="{StaticResource InfoWorkData}" MethodName="GetInfoWork" />
        </Window.Resources>

        <my:DataGrid DataContext="{Binding Source={StaticResource InfoWork}}"
                     AutoGenerateColumns="True" ItemsSource="{Binding}" Name="dataGrid1"
                     xmlns:my="http://schemas.microsoft.com/wpf/2008/toolkit" />

    C# code:

        namespace InfoSeeker
        {
            public class InfoWorkData
            {
                private InfoTableAdapters.InfoWorkTableAdapter infoAdapter;
                private Info infoDS;

                public InfoWorkData()
                {
                    infoDS = new Info();
                    infoAdapter = new InfoTableAdapters.InfoWorkTableAdapter();
                    infoAdapter.Fill(infoDS.InfoWork);
                }

                public DataView GetInfoWork()
                {
                    return infoDS.InfoWork.DefaultView;
                }
            }
        }

    Error shown in place of the designer page which has the grid on it:

        An unhandled exception has occurred: Type 'MS.Internal.Permissions.UserInitiatedNavigationPermission' in Assembly 'PresentationFramework, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' is not marked as serializable.
        at System.Runtime.Serialization.FormatterServices.InternalGetSerializableMembers(RuntimeType type)
        at System.Runtime.Serialization.FormatterServices.GetSerializableMembers(Type type, StreamingContext context)
        at System.Runtime.Serialization.Formatters.Binary.WriteObjectInfo.InitMemberInfo()
        at System.Runtime.Serialization.Formatters.Binary.WriteObjectInfo.InitSerialize(Object obj, ISurrogateSelector surrogateSelector, StreamingContext context, SerObjectInfoInit serObjectInfoInit, IFormatterConverter converter, ObjectWriter objectWriter)
        ...At: Ms.Internal.Designer.DesignerPane.LoadDesignerView()
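
    One common way to keep the designer from hitting the database when it instantiates the class is to guard the constructor with a design-mode check. A sketch (whether this clears this particular designer serialization error is an assumption):

        using System.ComponentModel;
        using System.Data;
        using System.Windows;

        public class InfoWorkData
        {
            private InfoTableAdapters.InfoWorkTableAdapter infoAdapter;
            private Info infoDS;

            public InfoWorkData()
            {
                infoDS = new Info();

                // Skip the TableAdapter fill when the WPF designer instantiates us.
                if (DesignerProperties.GetIsInDesignMode(new DependencyObject()))
                    return;

                infoAdapter = new InfoTableAdapters.InfoWorkTableAdapter();
                infoAdapter.Fill(infoDS.InfoWork);
            }

            public DataView GetInfoWork()
            {
                return infoDS.InfoWork.DefaultView;
            }
        }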

    Read the article

  • Webservices error for dev.virtualearth.net/webservices/geocode

    - by Xaisoft
    I am getting the following stack trace and have no idea what I am looking at or how to debug and fix it. Here is the error:

        Description: An error occurred during the parsing of a resource required to service this request. Please review the following specific parse error details and modify your source file appropriately.

        Parser Error Message: Reference.svcmap: Failed to generate code for the service reference 'GeocodeService'.

        Cannot import wsdl:portType
        Detail: An exception was thrown while running a WSDL import extension: System.ServiceModel.Description.DataContractSerializerMessageContractImporter
        Error: Could not load file or assembly 'System.Xml, Version=2.0.5.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e' or one of its dependencies. The system cannot find the file specified.
        XPath to Error Source: //wsdl:definitions[@targetNamespace='http://dev.virtualearth.net/webservices/v1/geocode/contracts']/wsdl:portType[@name='IGeocodeService']

        Cannot import wsdl:binding
        Detail: There was an error importing a wsdl:portType that the wsdl:binding is dependent on.
        XPath to wsdl:portType: //wsdl:definitions[@targetNamespace='http://dev.virtualearth.net/webservices/v1/geocode/contracts']/wsdl:portType[@name='IGeocodeService']
        XPath to Error Source: //wsdl:definitions[@targetNamespace='http://dev.virtualearth.net/webservices/v1/geocode']/wsdl:binding[@name='BasicHttpBinding_IGeocodeService']

        Cannot import wsdl:port
        Detail: There was an error importing a wsdl:binding that the wsdl:port is dependent on.
        XPath to wsdl:binding: //wsdl:definitions[@targetNamespace='http://dev.virtualearth.net/webservices/v1/geocode']/wsdl:binding[@name='BasicHttpBinding_IGeocodeService']
        XPath to Error Source: //wsdl:definitions[@targetNamespace='http://dev.virtualearth.net/webservices/v1/geocode']/wsdl:service[@name='GeocodeService']/wsdl:port[@name='BasicHttpBinding_IGeocodeService']

        Cannot import wsdl:binding
        Detail: There was an error importing a wsdl:portType that the wsdl:binding is dependent on.
        XPath to wsdl:portType: //wsdl:definitions[@targetNamespace='http://dev.virtualearth.net/webservices/v1/geocode/contracts']/wsdl:portType[@name='IGeocodeService']
        XPath to Error Source: //wsdl:definitions[@targetNamespace='http://dev.virtualearth.net/webservices/v1/geocode']/wsdl:binding[@name='CustomBinding_IGeocodeService']

        Cannot import wsdl:port
        Detail: There was an error importing a wsdl:binding that the wsdl:port is dependent on.
        XPath to wsdl:binding: //wsdl:definitions[@targetNamespace='http://dev.virtualearth.net/webservices/v1/geocode']/wsdl:binding[@name='CustomBinding_IGeocodeService']
        XPath to Error Source: //wsdl:definitions[@targetNamespace='http://dev.virtualearth.net/webservices/v1/geocode']/wsdl:service[@name='GeocodeService']/wsdl:port[@name='CustomBinding_IGeocodeService']

    Read the article

  • Namespace not found in MVC 3 Razor view

    - by PlTaylor
    I am adding a PagedList to my view, loosely following this tutorial. I have installed the PagedList reference using NuGet and set up my controller as follows:

        public ViewResult Index(int page = 1)
        {
            List<Part> model = this.db.Parts.ToList();
            const int pageSize = 20;
            return View(model.ToPagedList(page, pageSize));
        }

    And written my view with the following model at the top:

        @model PagedList.IPagedList<RIS.Models.Part>

    When I run the page I get the following error:

        Compiler Error Message: CS0246: The type or namespace name 'PagedList' could not be found (are you missing a using directive or an assembly reference?)

        Source Error:
        Line 27:
        Line 28:
        Line 29: public class _Page_Areas_Parts_Views_Part_Index_cshtml : System.Web.Mvc.WebViewPage<PagedList.IPagedList<RIS.Models.Part>> {

    The PagedList dll is being properly loaded in my controller, because when I take it out of my view everything works as expected. The CopyLocal property is set to 'True', and I have tried including the namespace in the Views\Web.Config in my specific Area. What else can I do to make the view see the namespace?

    Read the article

  • Visual Studio: Add Item / Add as link rather than just Add

    - by Pete d'Oronzio
    I'm new to Visual Studio, coming from Delphi. I have a directory tree full of .cs files (root is \Common). I also have a directory tree full of applications (root is \Applications). Finally, I've got a tree full of assemblies (root is \Assemblies). I'd like to keep my .cs files in the Common tree and all the environment voodoo (solutions, projects, settings, metadata, debug data, bin, etc.) in the Assemblies tree.

    So, for a simple example, I've got an assembly called PdMagic.Common.Math.dll. The solution and project are located in \Assemblies\Common\Math. All of its source (.cs) files are in \Common\Math (matrix.cs, trig.cs, mathtypes.cs, mathfuncs.cs, stats.cs, etc.).

    When I use Add Existing Item to add matrix.cs to my project, a copy of it is added to the \Assemblies\Common\Math folder. I just want to reference it; I don't want multiple copies lying around. I've tried Add Existing Item and used the drop-down to "Add As Link" rather than just "Add", and that seems to do what I want.

    Question: What is the "best practice" for this sort of thing? Do most people just put those .cs files all in the same folder as the project? Why isn't "Add As Link" the default?

    Thanks!

    Read the article

  • Why doesn't my NamedPipeServerStream wait??

    - by Frank Fella
    I'm working with a NamedPipeServerStream to communicate between two processes. Here is the code where I initialize and connect the pipe:

        void Foo(IHasData objectProvider)
        {
            Stream stream = objectProvider.GetData();
            if (stream.Length > 0)
            {
                using (NamedPipeServerStream pipeServer = new NamedPipeServerStream("VisualizerPipe",
                    PipeDirection.Out, 1, PipeTransmissionMode.Byte, PipeOptions.Asynchronous))
                {
                    string currentDirectory = Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location);
                    string uiFileName = Path.Combine(currentDirectory, "VisualizerUIApplication.exe");
                    Process.Start(uiFileName);
                    if (pipeServer.BeginWaitForConnection(PipeConnected, this).AsyncWaitHandle.WaitOne(5000))
                    {
                        while (stream.CanRead)
                        {
                            pipeServer.WriteByte((byte)stream.ReadByte());
                        }
                    }
                    else
                    {
                        throw new TimeoutException("Pipe connection to UI process timed out.");
                    }
                }
            }
        }

        private void PipeConnected(IAsyncResult e)
        {
        }

    But it never seems to wait. I constantly get the following exception:

        System.InvalidOperationException: Pipe hasn't been connected yet.
        at System.IO.Pipes.PipeStream.CheckWriteOperations()
        at System.IO.Pipes.PipeStream.WriteByte(Byte value)
        at PeachesObjectVisualizer.Visualizer.Show(IDialogVisualizerService windowService, IVisualizerObjectProvider objectProvider)

    I would think that after the wait returns, everything should be ready to go. If I use pipeServer.WaitForConnection() everything works fine, but hanging the application if the pipe doesn't connect is not an option.
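
    For what it's worth, the signaled wait handle alone does not complete the connection; with the Begin/End pattern, EndWaitForConnection must be called with the IAsyncResult before the pipe is usable. A sketch of that variant, keeping the surrounding method as above (the ReadByte loop replacing the CanRead check is also a suggested fix, since CanRead stays true past end of stream):

        IAsyncResult ar = pipeServer.BeginWaitForConnection(null, null);
        if (ar.AsyncWaitHandle.WaitOne(5000))
        {
            // Completes the connection; without this the pipe still
            // reports "Pipe hasn't been connected yet".
            pipeServer.EndWaitForConnection(ar);

            int b;
            while ((b = stream.ReadByte()) != -1)
                pipeServer.WriteByte((byte)b);
        }
        else
        {
            throw new TimeoutException("Pipe connection to UI process timed out.");
        }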

    Read the article

  • LoaderLock was detected, and turning off the warning does not help

    - by Scott M.
    I am trying to write an application that takes in sound from the default audio recording device on a computer. When running any code that accesses DirectX from my managed code, I get this error:

        DLL 'C:\Windows\assembly\GAC\Microsoft.DirectX.DirectSound\1.0.2902.0__31bf3856ad364e35\Microsoft.DirectX.DirectSound.dll' is attempting managed execution inside OS Loader lock. Do not attempt to run managed code inside a DllMain or image initialization function since doing so can cause the application to hang.

    The lines

        DevicesCollection coll = new DevicesCollection();

    and

        Device d = new Device(DSoundHelper.DefaultCaptureDevice);

    and

        Capture c = new Capture(DSoundHelper.DefaultCaptureDevice);

    all cause the LoaderLock MDA to pop up and tell me there is a problem. I have scoured the internet (Stack Overflow included) for solutions to this problem, but most people just say to turn off the warning, which does not work: when I turn off the warning, a generic ApplicationException is thrown, which is even less useful. I have seen the answers to this question as well, which said to remove the code that is causing the error. Others have said "fix your code."

    My question is: how can I call any (preferably managed) DirectX code from C# without getting this error?

    Read the article

  • AllowPartiallyTrustedCallersAttribute exception - unit testing with moq

    - by vdh_ant
    Hi guys, I am receiving the following exception when trying to run my unit tests using .NET 4.0 under VS2010 with Moq 3.1:

        Attempt by security transparent method 'SPPD.Backend.DataAccess.Test.Specs_for_Core.When_using_base.Can_create_mapper()' to access security critical method 'Microsoft.VisualStudio.TestTools.UnitTesting.Assert.IsNotNull(System.Object)' failed.

        Assembly 'SPPD.Backend.DataAccess.Test, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null' is marked with the AllowPartiallyTrustedCallersAttribute, and uses the level 2 security transparency model. Level 2 transparency causes all methods in AllowPartiallyTrustedCallers assemblies to become security transparent by default, which may be the cause of this exception.

    The test I am running is really straightforward and looks something like the following:

        [TestMethod]
        public void Can_create_mapper()
        {
            this.SetupTest();

            var mockMapper = new Moq.Mock<IMapper>().Object;
            this._Resolver.Setup(x => x.Resolve<IMapper>()).Returns(mockMapper).Verifiable();

            var testBaseDa = new TestBaseDa();
            var result = testBaseDa.TestCreateMapper<IMapper>();

            Assert.IsNotNull(result); // <<< THROWS EXCEPTION HERE
            Assert.AreSame(mockMapper, result);

            this._Resolver.Verify();
        }

    I have no idea what this means, and I have been looking around and have found very little on the topic. The closest reference I have found is this: http://dotnetzip.codeplex.com/Thread/View.aspx?ThreadId=80274, but it's not very clear on what they did to fix it... Anyone got any ideas?
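
    Going by the level 2 transparency hint in the message, one thing that may be worth trying is opting the test assembly out of the level 2 rules in its AssemblyInfo.cs (whether this clears this particular Moq/MSTest combination is an assumption):

        using System.Security;

        // Either remove this attribute from the test assembly entirely...
        // [assembly: AllowPartiallyTrustedCallers]

        // ...or keep it but revert the assembly to the .NET 2.0 (level 1)
        // transparency rules, so its methods are no longer forced transparent:
        [assembly: SecurityRules(SecurityRuleSet.Level1)]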

    Read the article

  • Spring Security RememberMe Services with Session Cookie

    - by Jarrod
    I am using Spring Security's RememberMe services to keep a user authenticated. I would like to find a simple way to have the RememberMe cookie set as a session cookie rather than with a fixed expiration time. For my application, the cookie should persist until the user closes the browser. Any suggestions on how best to implement this? Any concerns about this being a potential security problem?

    The primary reason for doing so is that with a cookie-based token, any of the servers behind our load balancer can service a protected request without relying on the user's Authentication being stored in an HttpSession. In fact, I have explicitly told Spring Security to never create sessions, using the namespace. Further, we are using Amazon's Elastic Load Balancing, and so sticky sessions are not supported.

    NB: Although I am aware that as of April 8, 2010, Amazon now supports sticky sessions, I still do not want to use them, for a handful of other reasons. Namely, the untimely demise of one server would still cause the loss of sessions for all users associated with it.
    http://aws.amazon.com/about-aws/whats-new/2010/04/08/support-for-session-stickiness-in-elastic-load-balancing/

    Read the article

  • DataView.RowFilter Vs DataTable.Select() vs DataTable.Rows.Find()

    - by Aseem Gautam
    Considering the code below:

        DataView someView = new DataView(sometable);
        someView.RowFilter = someFilter;
        if (someView.Count > 0)
        {
            ...
        }

    Quite a number of articles say DataTable.Select() is better than using DataViews, but these are prior to VS2008:

    - Solved: The Mystery of DataView's Poor Performance with Large Recordsets
    - Array of DataRecord vs. DataView: A Dramatic Difference in Performance

    Googling on this topic, I found some articles/forum topics which mention that DataTable.Select() itself is quite buggy (not sure on this) and underperforms in various scenarios. In the Best Practices ADO.NET topic on MSDN, it is suggested that if there is a primary key defined on a datatable, the FindRows() or Find() methods should be used instead of DataTable.Select(). An article here (.NET 1.1) benchmarks all three approaches plus a couple more, but it is for version 1.1, so I'm not sure whether those results are still valid now. According to it, DataRowCollection.Find() outperforms all approaches, and DataTable.Select() outperforms DataView.RowFilter.

    So I am quite confused about what might be the best approach for finding rows in a datatable. Or is there no single good way to do this, and do multiple solutions exist depending on the scenario?
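
    For reference, a minimal sketch of the primary-key lookup that the MSDN guidance refers to (the table and key values here are made up):

        using System;
        using System.Data;

        class FindDemo
        {
            static void Main()
            {
                var table = new DataTable("Orders");
                DataColumn idCol = table.Columns.Add("Id", typeof(int));
                table.Columns.Add("Customer", typeof(string));
                table.PrimaryKey = new[] { idCol }; // required for Rows.Find()

                table.Rows.Add(1, "Contoso");
                table.Rows.Add(2, "Fabrikam");

                // Keyed lookup via the primary-key index, instead of scanning
                // with Select() or a DataView.RowFilter:
                DataRow row = table.Rows.Find(2);
                Console.WriteLine(row["Customer"]); // Fabrikam
            }
        }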

    Read the article

  • JQgrid - Get Row Number instead of ID

    - by mariojjsimoes
    Hello, all. I am creating a jqGrid from a database table that does not have a single-field primary key. Therefore, the field I am supplying as id is not unique, and the same one exists in several rows. Because of this, when passing a reference to the data with ondblClickRow to a function external to the grid, I need to use the row number and not the id. To test, I'm using

        ondblClickRow: function(id) { alert($("#grid1").getInd('rowid')); },

    and I should be getting an alert with the row number, except that it isn't working. I've been over the documentation and can't understand what I am doing wrong... Any help would be greatly appreciated! Thanks in advance, Mario.

    Below is my full grid:

        jQuery(document).ready(function() {
            var mygrid = jQuery("#grid1").jqGrid({
                datatype: 'xmlstring',
                datastr: grid1RsXML,
                width: 1024,
                height: 500,
                colNames: ['DEVICE_ID', 'JOB_SIZE_IN_BYTES', 'USER_NAME', 'HOST_NAME', 'DAY_OF_WEEK', 'JOB_ID'],
                colModel: [
                    { name: 'DEVICE_ID', index: 'DEVICE_ID', width: 55, sortable: true },
                    { name: 'JOB_SIZE_IN_BYTES', index: 'JOB_SIZE_IN_BYTES', width: 40, sortable: true },
                    { name: 'USER_NAME', index: 'USER_NAME', width: 60, sortable: true },
                    { name: 'HOST_NAME', index: 'HOST_NAME', width: 50, align: "right", sortable: true },
                    { name: 'DAY_OF_WEEK', index: 'DAY_OF_WEEK', width: 10, sortable: true },
                    { name: 'JOB_ID', index: 'JOB_ID', width: 30, sortable: true }
                ],
                rowNum: 1000,
                autowidth: true,
                //rowList: [10, 20, 30],
                rowList: [1],
                pager: '#grid1Pager',
                sortname: 'DEVICE_ID',
                viewrecords: true,
                rownumbers: true,
                sortorder: "desc",
                sortable: true,
                gridview: true,
                xmlReader: {
                    root: "recordset",
                    row: "record",
                    repeatitems: false,
                    id: "DEVICE_ID"
                },
                caption: "All Jobs - Double Click for detailed history",
                ondblClickRow: function(id) { alert($("#grid1").getInd('rowid')); },
                toolbar: [true, "top"],
                url: grid1RsXML
            });
        });

    Read the article

  • How to change StartupUri of WPF Application?

    - by Akash Kava
    I am trying to modify App.cs and load the WPF XAML files from code-behind, but it's not working as it should. No matter what I try to set as StartupUri, it doesn't start; the program quits after this.

        public partial class App : Application
        {
            protected override void OnStartup(StartupEventArgs e)
            {
                base.OnStartup(e);

                LoginDialog dlg = new LoginDialog();
                if (dlg.ShowDialog() != true)
                    return;

                switch (dlg.ChoiceApp)
                {
                    case ChoiceApp.CustomerEntry:
                        StartupUri = new Uri("/MyApp;component/Forms/CustomerEntry.xaml", UriKind.Relative);
                        break;
                    case ChoiceApp.VendorEntry:
                        StartupUri = new Uri("/MyApp;component/Forms/VendorEntry.xaml", UriKind.Relative);
                        break;
                }
            }
        }

    I even traced through and found that LoginDialog works correctly and returns values correctly, but setting "StartupUri" does not work. I checked in a reverse-engineered assembly that the DoStartup method of App gets called after OnStartup, so technically my StartupUri must load, but it doesn't. In App.xaml, no startup URI is defined at all.

    Note: Bug confirmed. I noticed that ShowDialog sets Application.MainWindow and, when the dialog ends, sets it back to null; because of this, setting StartupUri does not work after calling a modal dialog in OnStartup or the Startup event. There is no error or exception about an invalid URI or anything like that. This method works without a dialog box being called in Startup/OnStartup; I think calling ShowDialog in this method causes something like its MainWindow being set to an expired window, and it shuts down after this.
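
    A common workaround is to sidestep StartupUri entirely: keep the application alive across the dialog by setting ShutdownMode, then construct and show the chosen window directly. A sketch (assuming CustomerEntry and VendorEntry are the window classes behind those XAML files):

        protected override void OnStartup(StartupEventArgs e)
        {
            base.OnStartup(e);

            // Stop the app from shutting down when the login dialog
            // (briefly the only window) closes.
            ShutdownMode = ShutdownMode.OnExplicitShutdown;

            var dlg = new LoginDialog();
            if (dlg.ShowDialog() != true)
            {
                Shutdown();
                return;
            }

            Window main = dlg.ChoiceApp == ChoiceApp.CustomerEntry
                ? (Window)new CustomerEntry()
                : new VendorEntry();

            MainWindow = main;
            ShutdownMode = ShutdownMode.OnMainWindowClose;
            main.Show();
        }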

    Read the article

  • Return template as string - Django

    - by Ninefingers
    Hi all, I'm still not sure this is the correct way to go about this - maybe not - but I'll ask anyway. I'd like to rewrite WordPress (justification: because I can), albeit more simply, myself in Django, and I'm looking to be able to configure elements in different ways on the page. So, for example, I might have:

    - Blog models
    - A site update message model
    - A latest comments model

    Now, for each page on the site, I want the user to be able to choose the order of, and any items that go on, the page. In my thought process, this would work something like:

        class Page(models.Model):
            Slug = models.CharField(max_length=100)

        class PageItem(models.Model):
            Page = models.ForeignKey(Page)
            ItemType = models.CharField(max_length=100)
            InstanceNum = models.IntegerField()
            # all models have primary keys

    Then, ideally, my template would loop through all the PageItems in a page, which is easy enough to do. But what if my page item is a site update as opposed to a blog post? Basically, I am thinking I'd like to pull different item types back in different orders and display them using the appropriate templates.

    Now, I thought one way to do this would be, in views.py, to loop through all of the objects, call the appropriate view function for each, return a bit of HTML as a string, and then pipe that into the resulting template.

    My question is: is this the best way to go about doing things? If so, how do I do it? If not, which way should I be going? I'm pretty new to Django, so I'm still learning what it can and can't do - please bear with me. I've checked SO for dupes and don't think this has been asked before...

    Read the article
