Search Results

Search found 6343 results on 254 pages for 'dll injection'.


  • SQL SERVER – Curious Case of Disappearing Rows – ON UPDATE CASCADE and ON DELETE CASCADE – Part 1 of 2

    - by pinaldave
    Social media has created an Always Connected World for us. Recently I enrolled myself as a student to learn new technologies. I had decided to focus on learning and not to stay connected to the internet while I was in the learning sessions. On the second day of the event, after the learning was over, I noticed lots of notifications from a friend on my various social media handles. He had connected with me on Twitter, Facebook, Google+, LinkedIn, YouTube, as well as SMS, WhatsApp on the phone, Skype messages and, not to forget, a few emails. I right away called him up. The problem was very unique – let us hear the problem in his own words. “Pinal – we are in big trouble and we are not able to figure out what is going on. Our product details table is continuously losing rows. Lots of rows have disappeared since morning and we are unable to find why the rows are getting deleted. We have made sure that there is no DELETE command executed on the table as well. As a matter of fact, we have removed every single piece of code which references the table. We have done so many crazy things out of desperation but no luck. The rows are continuously deleted in a random pattern. Do you think we have a problem with intrusion or a virus?” After describing the problem he had pasted a few rants about why I was not available during the day. I think it would not be smart to post those exact words here (for many reasons). Well, my immediate reaction was to get online with him. His problem was unique to him and his team had been all out to fix the issue since morning. As he said, he had done quite a lot out of desperation. I started asking questions about auditing, policy management and profiling the data. Very soon I realized that this problem was not as advanced as it looked. There was no intrusion, SQL Injection or virus issue. Well, long story short – it was a very simple issue of a foreign key created with ON UPDATE CASCADE and ON DELETE CASCADE.  CASCADE allows deletions or updates of key values to cascade through the tables defined to have foreign key relationships that can be traced back to the table on which the modification is performed. ON DELETE CASCADE specifies that if an attempt is made to delete a row with a key referenced by foreign keys in existing rows in other tables, all rows containing those foreign keys are also deleted. ON UPDATE CASCADE specifies that if an attempt is made to update a key value in a row, where the key value is referenced by foreign keys in existing rows in other tables, all of the foreign key values are also updated to the new value specified for the key. (Reference: BOL) In simple words – when ON DELETE CASCADE is specified, whenever data from Table A is deleted, any rows in other tables that reference it through a foreign key are deleted as well. In my friend’s case, they had two tables, Products and ProductDetails. They had created foreign key referential integrity on the product id between the tables. Now, as fall arrived, they were updating their catalogue, and while updating the catalogue they were deleting products which were no longer available. As the changes were cascading, the corresponding rows were also deleted from the other table. This is CORRECT. As a matter of fact, there is no error or anything, and SQL Server is behaving exactly how it should behave. The problem was in the understanding and an inappropriate implementation of the business logic.
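    A minimal T-SQL sketch of the kind of schema that produces this behavior (the table and column names here are illustrative, not my friend's actual schema):

        CREATE TABLE Products
        (
            ProductID INT PRIMARY KEY,
            Name NVARCHAR(100)
        );

        CREATE TABLE ProductDetails
        (
            ProductDetailID INT PRIMARY KEY,
            ProductID INT NOT NULL,
            Detail NVARCHAR(500),
            CONSTRAINT FK_ProductDetails_Products FOREIGN KEY (ProductID)
                REFERENCES Products (ProductID)
                ON UPDATE CASCADE
                ON DELETE CASCADE
        );

        -- Deleting a product silently removes its detail rows as well,
        -- even though no DELETE is ever issued against ProductDetails.
        DELETE FROM Products WHERE ProductID = 1;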
    What they needed were a Product Master table, a Current Product Catalogue table, and a Product Order Details History table. However, they were using only two tables, and without a proper understanding of the relation between them it was built using foreign keys. If there were to be only two tables, they should have used a soft delete, which does not actually delete the record but just hides it from the original product table. This workaround would have saved them from the cascading delete issue. I will be writing a detailed post on the design implications etc. in a future post, as in the above three lines I cannot cover every issue related to designing and it is also not the scope of this blog post. More about designing in future blog posts. Once they learned their mistake, they were happy as there was no intrusion, but trust me, sometimes we are our own enemy and this is a great example of it. In tomorrow’s blog post we will go over their code and workarounds. Feel free to share your opinions, experiences and comments. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • ApiChange Corporate Edition

    - by Alois Kraus
    In my initial announcement I could only cover a small subset of what ApiChange can do for you. Let’s look at how ApiChange can help you to fix bugs caused by wrong usage of an API in a fraction of the time it would normally take. It happens that software is tested and some bugs show up. One bug could be…: we get way too many log messages during our test run. Now you have the task to find the most frequent messages and eliminate the Log calls from the source code. But what about the myriad other log calls? How can we check that the distribution of log calls is nearly equal across all developers? And if it is not, how can we contact the developer to check his code? ApiChange can help you to connect these loose ends. It combines several information silos into one cohesive view. The picture below shows how it is able to fill the gaps. The public version does currently “only” parse the binaries and pdbs to give you the following columns for a –whousesmethod query: If it happens that you have Rational ClearCase (a source control system) in your development shop and an Active Directory in place, then ApiChange will try to determine, from the source file (which was determined from the pdb), the last check-in user, who should be present in your Active Directory. From there it is only a small hop to an LDAP query to your AD domain or the GC (Global Catalog) to get, from the user name, his Full name Email Phone number Department …. ApiChange will append this additional data to all of your query results which contain source files if you add the –fileinfo option. As I said, this is currently not enabled by default since the AD domain needs to be configured, which is currently only some hard-coded values in the SiteConstants.cs source file of ApiChange.Api.dll. Once you have got this data you can generate metrics based on source file, developer, assembly, … and add additional data by drag and drop directly into the pivot tables inside Excel. This allows you e.g. to generate a report which lists the source files with the most log calls in descending order, along with the developer name and email, in the pivot table. Armed with this knowledge you can take meaningful measures, e.g. ask the developer if the huge number of log calls in this source file can be optimized. I am aware that this is a very specific scenario, but it is a huge time saver when you are able to fill the missing gaps of information. ApiChange does this in an extensible way. namespace ApiChange.ExternalData {     public interface IFileInformationProvider     {         UserInfo GetInformationFromFile(string fileName);     } } It defines an interface where you can implement your custom information provider to close the gap between the source control system and the real person I have to send an email to, to ask if his code needs a closer inspection.
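    The AD/GC lookup described above can be done with a few lines of System.DirectoryServices code. This is only a rough sketch of that step, not ApiChange's actual implementation; the domain name, account name and attribute list are assumptions for illustration:

        using System;
        using System.DirectoryServices;

        class AdUserLookup
        {
            static void Main()
            {
                // Hypothetical check-in user resolved from the pdb/source control lookup.
                string account = "jdoe";

                // Query the Global Catalog for that account and pull a few attributes.
                using (var root = new DirectoryEntry("GC://DC=contoso,DC=com"))
                using (var searcher = new DirectorySearcher(root))
                {
                    searcher.Filter = "(&(objectCategory=person)(sAMAccountName=" + account + "))";
                    searcher.PropertiesToLoad.Add("displayName");
                    searcher.PropertiesToLoad.Add("mail");
                    searcher.PropertiesToLoad.Add("telephoneNumber");
                    searcher.PropertiesToLoad.Add("department");

                    SearchResult result = searcher.FindOne();
                    if (result != null)
                    {
                        foreach (string attribute in new[] { "displayName", "mail", "telephoneNumber", "department" })
                        {
                            if (result.Properties[attribute].Count > 0)
                            {
                                Console.WriteLine("{0}: {1}", attribute, result.Properties[attribute][0]);
                            }
                        }
                    }
                }
            }
        }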

    Read the article

  • How do you manage extensibility in your multi-tenant systems?

    - by Brian MacKay
    I've got a few big web based multi-tenant products now, and very soon I can see that there will be a lot of customizations that are tenant specific. An extra field here or there, maybe an extra page or some extra logic in the middle of a workflow - that sort of thing. Some of these customizations can be rolled into the core product, and that's great. Some of them are highly specific and would get in everyone else's way. I have a few ideas in mind for managing this, but none of them seem to scale well. The obvious solution is to introduce a ton of client-level settings, allowing various 'features' to be enabled on a per-client basis. The downside with that, of course, is massive complexity and clutter. You could introduce a truly huge number of settings, and over time various types of logic (presentation, business) could get way out of hand. Then there's the problem of client-specific fields, which begs for something cleaner than just adding a bunch of nullable fields to the existing tables. So what are people doing to manage this? Force.com seems to be the master of extensibility; obviously they've created a platform from the ground up that is super extensible. You can add on to almost anything with their web-based UI. FogBugz did something similar where they created a robust plugin model that, come to think of it, might have actually been inspired by Force. I know they spent a lot of time and money on it and if I'm not mistaken the intention was to actually use it internally for future product development. Sounds like the kind of thing I could be tempted to build but probably shouldn't. :) Is a massive investment in pluggable architecture the only way to go? How are you managing these problems, and what kind of results are you seeing? EDIT: It does look as though FogBugz handled the problem by building a fairly robust platform and then using that to put together their screens. To extend it you create a DLL containing classes that implement interfaces like ISearchScreenGridColumn, and that becomes a module. I'm sure it was tremendously expensive to build considering that they have a large team of devs and they worked on it for months, plus their surface area is perhaps 5% of the size of my application. Right now I am seriously wondering if Force.com is the right way to handle this. And I am a hard core ASP.Net guy, so this is a strange position to find myself in.
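    One common middle ground between "a ton of client-level settings" and a full plugin platform is a single per-tenant feature-flag service that the rest of the code asks before showing an extra field or workflow step. A minimal C# sketch, with entirely hypothetical names and in-memory data standing in for a real settings store:

        using System.Collections.Generic;

        public interface ITenantFeatureService
        {
            bool IsEnabled(int tenantId, string featureName);
        }

        public class InMemoryTenantFeatureService : ITenantFeatureService
        {
            // Tenant id -> set of enabled feature names (would normally be loaded from the database).
            private readonly Dictionary<int, HashSet<string>> _features =
                new Dictionary<int, HashSet<string>>
                {
                    { 1, new HashSet<string> { "ExtraInvoiceField", "CustomApprovalStep" } },
                    { 2, new HashSet<string>() }
                };

            public bool IsEnabled(int tenantId, string featureName)
            {
                HashSet<string> enabled;
                return _features.TryGetValue(tenantId, out enabled) && enabled.Contains(featureName);
            }
        }

    The workflow or view code then branches on IsEnabled(tenantId, "CustomApprovalStep") rather than accumulating tenant-specific if-statements, which at least keeps the clutter in one place even if it does not answer the schema question.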

    Read the article

  • What Can We Learn About Software Security by Going to the Gym

    - by Nick Harrison
    There was a recent rash of car break-ins at the gym. Not an epidemic by any stretch, probably 4 or 5, but still... My gym used to allow you to hang your keys from a peg board at the front desk. This way you could come to the gym dressed to work out, lock your valuables in your car, and not have anything to worry about. Ignorance is bliss. The problem was that anyone who wanted to could go pick up your car keys, click the unlock button and find your car. Once there, they could rummage through your stuff and then walk back in and finish their workout as if nothing had happened. The people doing this were a little smarter than the average thief and would swipe some but not all of your cash, leaving everything else in place. Most thieves would steal the whole car and be busted more quickly. The victims were unaware that anything had happened for several days. Fortunately, once the victims realized what had happened, the gym was still able to pull security tapes and find out who was misbehaving. All of the bad guys were busted, and everyone can now breathe a sigh of relief. It is once again safe to go to the gym. Except there was still a fundamental problem. Putting your keys on a peg board by the front door is just asking for bad things to happen. One person got busted exploiting this security flaw. Others can still be exploiting it. In fact, others may well have been exploiting it and simply never got caught. How long would it take you to realize that $10 was missing from your wallet, if everything else was there? How would you even know when it went missing? Would you go to the front desk and even bother to ask them to review security tapes if you were only missing a small amount? Once highlighted, it is easy to see how commonly such a vulnerability may have been exploited. So the gym took the very reasonable precaution of removing the peg board. To me the most shocking part of this story is the resulting uproar from gym members losing the convenient key peg. How dare they remove the trusted peg board? How can I work out now, I have to carry my keys from machine to machine? How can I enjoy my workout with this added inconvenience? This all happened a couple of weeks ago, and some people are still complaining. In light of the recent high profile hacking, there are a couple of parallels that can be drawn. Many web sites are riddled with vulnerabilities as crazy and easily exploitable as leaving your car keys by the front door while you work out. No one ever considered thanking the people who were swiping these keys for pointing out the vulnerability. Without hesitation, they had their gym memberships revoked and are awaiting prosecution. The gym did recognize the vulnerability for what it is, and closed up that attack vector. What can we learn from this? Monitoring and logging will not prevent a crime, but they will allow us to identify that a crime took place and may help track down who did it. Once we find a security weakness, we need to eliminate it. We may never identify and eliminate all security weaknesses, but we cannot allow well known vulnerabilities to persist in our system. In our case, we are not likely to meet resistance from end users. We are more likely to meet resistance from stakeholders, product owners, keepers of schedules and budgets. We may meet resistance from integration partners, coworkers, and third party vendors. Regardless of the source, we will see resistance, but the weakness needs to be dealt with.
    There is no need to glorify a cracker for bringing a security weakness to light. Regardless of their claimed motives, they are not heroes. There is also no point in wasting time defending weaknesses once they are identified. Deal with the weakness and move on. It may be embarrassing to find security weaknesses in our systems, but it is even more embarrassing to continue ignoring them. Even if it is unpopular, we need to seek out security weaknesses and eliminate them when we find them. SANS (http://www.sans.org) and MITRE have put together the Common Weakness Enumeration (http://cwe.mitre.org/), which lists out common weaknesses. The site navigation takes a little getting used to, but there is a treasure trove here. Here is the detail page for SQL Injection. It clearly states how this can be exploited, in case anyone doubts that the weakness should be taken seriously, and more importantly how to mitigate the risk.
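    For SQL injection specifically, the core of the mitigation the CWE entry describes is to stop concatenating user input into SQL text and pass it as a parameter instead. A minimal C# sketch (the connection string, table and column names are placeholders):

        using System;
        using System.Data.SqlClient;

        class MemberLookup
        {
            static void PrintMember(string userSuppliedName)
            {
                using (var connection = new SqlConnection(@"Server=.;Database=Gym;Integrated Security=true"))
                using (var command = new SqlCommand(
                    "SELECT Id, Name FROM Members WHERE Name = @name", connection))
                {
                    // The user-supplied value travels as data, never spliced into the SQL string.
                    command.Parameters.AddWithValue("@name", userSuppliedName);

                    connection.Open();
                    using (var reader = command.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            Console.WriteLine("{0}: {1}", reader.GetInt32(0), reader.GetString(1));
                        }
                    }
                }
            }
        }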

    Read the article

  • How Did Microsoft Market DotNet?

    - by Fendy
    I just read an article by Joel about Microsoft's breaking change (non-backwards compatibility) with dot net's introduction. It is interesting and explicitly reflects the conditions of that time. But now almost 10 years have passed. The breaking change It is mainly about how bad it was of Microsoft to introduce non-backwards-compatible development tools, such as dot net, instead of improving the already widely used asp classic or VB6. As many have known, dot net is not natively embedded in Windows XP (it is in Vista or 7), so in order to use .net apps, you needed to install the .net framework of over 300mb (which was big in those days). However, as we see, nowadays many businesses use .net as their main development tool, with asp.net or mvc for their web-based applications. C# is nowadays one of the top programming languages (the most questions on stackoverflow). The more interesting part is that the win32api is still alive even though there is newer technology out there (and it is still widely used). Imagine if Microsoft had not introduced the breaking change: there would be many corporations still using asp classic or vb-based applications (there still are, but not that many). There are many corporations using additional services such as Azure or SharePoint (despite how expensive they are). Please note that I also know there are many flagship applications (maybe Adobe's and Blizzard's) that still use C-based or older languages and have not been ported to newer high-level languages. The question How did Microsoft persuade users to migrate their old applications to dot net? As we know, it is very hard and gives no immediate value to rewrite applications (the Netscape story), and it is very risky. I am more interested in Microsoft's way, and not opinions such as "because dot net is OOP, or dot net is dll-embedable, etc". This question may be constructive, as technology has changed vastly over recent times. As we can see, Microsoft changed Asp.Net WebForms to MVC, WinForms is legacy now, it is starting to change to use the Windows Store rather than traditional installation, touchscreens, and later on we will have see-through applications such as Google Glass. And those will be breaking changes. We will need to account for portability as an issue nowadays. We will need more than just a mere technology choice, but also migration plans. Maybe we will even need something as critical as a multiplatform language compiler, as approached by Joel's Wasabi. (hey, I read his articles too much!)

    Read the article

  • Configuring the iPlanet as web tier for Oracle WebCenter Content (UCM)

    - by Adao Junior
    If you are looking to configure iPlanet as the web server/proxy to use with Oracle WebCenter Content, you probably won’t find specific documentation for that, or you will find some old, complex notes related to the old 10gR3. This post will help you out with a few simple steps. That’s the diagram of the test scenario, considering that in production you will deploy to a cluster environment. First you need the software; for our scenario you will need: - Oracle iPlanet Web Server 7.0.15+ (Installed) - Oracle WebCenter Content 11gR1 PS5 (Installed) - Oracle WebLogic Web Server Plugins 11g (1.1) - Supported JDK (Using Oracle Java JDK 7u4 for the test) - Certified Client OS - Certified Server OS (Using Oracle Solaris 11 for the test) - Certified Database (Using Oracle Database 11.2.0.3 for the test) Then the configuration: - Download the latest plugin: http://www.oracle.com/technetwork/middleware/ias/downloads/wls-plugins-096117.html - Extract the WLSPlugin11g-iPlanet7.0 to some folder, like <iPlanet_Home>/plugins/wls11 - Include the plugin reference in magnus.conf: If Unix (Solaris or Linux), include the line: Init fn="load-modules" shlib="/apps/oracle/WebServer7/plugins/wls11/lib/mod_wl.so" If Windows, include the line: Init fn="load-modules" shlib="D:\\oracle\\WebServer7\\plugins\\wls11\\lib\\mod_wl.dll" - Include the proxy reference in the obj.conf of each instance: <Object name="weblogic" ppath="*/cs/*"> Service fn="wl-proxy" WebLogicCluster="wcc-node1:16201,wcc-node2:16202, wcc-node3:16203" </Object>   <Object name="weblogic" ppath="*/_dav/*"> Service fn="wl-proxy" WebLogicCluster="wcc-node1:16201,wcc-node2:16202, wcc-node3:16203" </Object>   <Object name="weblogic" ppath="*/_ocsh/*"> Service fn="wl-proxy" WebLogicCluster="wcc-node1:16201,wcc-node2:16202, wcc-node3:16203" </Object>   <Object name="weblogic" ppath="*/adfAuthentication/*"> Service fn="wl-proxy" WebLogicCluster="wcc-node1:16201,wcc-node2:16202, wcc-node3:16203" </Object> If you are using a single node setup, change the Service fn=…. line to something like: Service fn="wl-proxy" WebLogicHost=<wcc-server> WebLogicPort=16200 With these configurations, you should have the WebCenter Content UI working through iPlanet; test it. [http://<web-server>/cs/] With the UI working, the last step is to configure WebDAV: - Go to the iPlanet Admin Console (usually https://<web-server>:8989) - Go to Configurations >> [instance] >> Virtual Servers >> [Virtual Server] >> WebDAV: - Click New - Populate the URI with /cs/idcplg/webdav: - Select “Anyone (No Authentication)”; WebCenter Content will take care of the security: This will allow you to use the WebDAV feature and the Desktop Integration Suite, including double-byte characters. Other iPlanet tuning could be done; I can cover that in a future post related to iPlanet. Cross-posted on the ContentrA.com Blog Related posts:  - Using a Web Proxy Server with WebCenter Family

    Read the article

  • WebAPI and MVC4 and OData

    - by Aligned
    I was looking closer into WebAPI, specifically at how to use OData to avoid writing GetCustomerByCustomerId(int id) methods all over the place. I had problems just returning IQueryable<T> as some sites suggested in WebAPI (Assembly System.Web.Http.dll, v4.0.0.0).  I think things changed in the release version and the blog posts are still out of date. There is no [Queryable] as the answer to this question suggests. Once I got the WebAPI.OData NuGet package and added [Queryable] to the method, http://localhost:57146/api/values/?$filter=Id%20eq%201 worked (don’t forget the ‘$’). Now the main question is whether I should do this and how to stop logged-in users from sniffing the url and getting data for other users. John V. Peterson has a post on securing WebAPI with headers and intercepting the call at that point. He had an update to use HttpMessageHandlers instead. I think I’ll use this to force the call to contain some kind of unique code for the user, but I’m still thinking about this. I will not expose this to the public, just to my calls within my Forms Authentication areas. Other links: http://robbincremers.me/2012/02/16/building-and-consuming-rest-services-with-asp-net-web-api-and-odata-support/ ~lots of good information John V Peterson example: https://github.com/johnvpetersen/ASPWebAPIExample ~ all data access goes through the WebApi and the web client doesn’t have a connection string ~ There is a code library for calling the WebApi from MVC using the HttpClient. It’s a great starting point http://blogs.msdn.com/b/alexj/archive/2012/08/15/odata-support-in-asp-net-web-api.aspx ~ Beta (9/18/2012) NuGet package to help with what I want to do? ~ has a sample code project with examples http://blogs.msdn.com/b/alexj/archive/2012/08/15/odata-support-in-asp-net-web-api.aspx http://blogs.msdn.com/b/alexj/archive/2012/08/21/web-api-queryable-current-support-and-tentative-roadmap.aspx http://stackoverflow.com/questions/10885868/asp-net-mvc4-rc-web-api-odata-filter-not-working-with-iqueryable JSON: pass the correct format in the header (Accept: application/json). $format=JSON doesn’t appear to be working. Async methods built into WebApi! Look for the GetAsync methods.
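    A rough sketch of the kind of controller action that made the $filter URL above work for me; the Value class and its seed data are made up for illustration, and [Queryable] is assumed to come from the Web API OData NuGet package:

        using System.Collections.Generic;
        using System.Linq;
        using System.Web.Http;

        public class Value
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public class ValuesController : ApiController
        {
            private static readonly List<Value> Data = new List<Value>
            {
                new Value { Id = 1, Name = "First" },
                new Value { Id = 2, Name = "Second" }
            };

            // The [Queryable] attribute applies OData query options such as
            // ?$filter=Id eq 1 to the IQueryable returned here.
            [Queryable]
            public IQueryable<Value> Get()
            {
                return Data.AsQueryable();
            }
        }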

    Read the article

  • Namespaces are obsolete

    - by Bertrand Le Roy
    To those of us who have been around for a while, namespaces have been part of the landscape. One could even say that they have been defining the large-scale features of the landscape in question. However, something happened fairly recently that I think makes this venerable structure obsolete. Before I explain this development and why it’s a superior concept to namespaces, let me recapitulate what namespaces are and why they’ve been so good to us over the years… Namespaces are used for a few different things: Scope: a namespace delimits the portion of code where a name (for a class, sub-namespace, etc.) has the specified meaning. Namespaces are usually the highest-level scoping structures in a software package. Collision prevention: name collisions are a universal problem. Some systems, such as jQuery, wave it away, but the problem remains. Namespaces provide a reasonable approach to global uniqueness (and in some implementations such as XML, enforce it). In .NET, there are ways to relocate a namespace to avoid those rare collision cases. Hierarchy: programmers like neat little boxes, and especially boxes within boxes within boxes. For some reason. Regular human beings on the other hand, tend to think linearly, which is why the Windows explorer for example has tried in a few different ways to flatten the file system hierarchy for the user. 1 is clearly useful because we need to protect our code from bleeding effects from the rest of the application (and vice versa). A language with only global constructs may be what some of us started programming on, but it’s not desirable in any way today. 2 may not be always reasonably worth the trouble (jQuery is doing fine with its global plug-in namespace), but we still need it in many cases. One should note however that globally unique names are not the only possible implementation. In fact, they are a rather extreme solution. What we really care about is collision prevention within our application. What happens outside is irrelevant. 3 is, more than anything, an aesthetical choice. A common convention has been to encode the whole pedigree of the code into the namespace. Come to think about it, we never think we need to import “Microsoft.SqlServer.Management.Smo.Agent” and that would be very hard to remember. What we want to do is bring nHibernate into our app. And this is precisely what you’ll do with modern package managers and module loaders. I want to take the specific example of RequireJS, which is commonly used with Node. Here is how you import a module with RequireJS: var http = require("http"); This is of course importing a HTTP stack module into the code. There is no noise here. Let’s break this down. Scope (1) is provided by the one scoping mechanism in JavaScript: the closure surrounding the module’s code. Whatever scoping mechanism is provided by the language would be fine here.
Collision prevention (2) is very elegantly handled. Whereas relocating is an afterthought, and an exceptional measure with namespaces, it is here on the frontline. You always relocate, using an extremely familiar pattern: variable assignment. We are very much used to managing our local variable names and any possible collision will get solved very easily by picking a different name. Wait a minute, I hear some of you say. This is only taking care of collisions on the client-side, on the left of that assignment. What if I have two libraries with the name “http”? Well, You can better qualify the path to the module, which is what the require parameter really is. As for hierarchical organization, you don’t really want that, do you? RequireJS’ module pattern does elegantly cover the bases that namespaces used to cover, but it also promotes additional good practices. First, it promotes usage of self-contained, single responsibility units of code through the closure-based, stricter scoping mechanism. Namespaces are somewhat more porous, as using/import statements can be used bi-directionally, which leads us to my second point… Sane dependency graphs are easier to achieve and sustain with such a structure. With namespaces, it is easy to construct dependency cycles (that’s bad, mmkay?). With this pattern, the equivalent would be to build mega-components, which are an easier problem to spot than a decay into inter-dependent namespaces, for which you need specialized tools. I really like this pattern very much, and I would like to see more environments implement it. One could argue that dependency injection has some commonalities with this for example. What do you think? This is the half-baked result of some morning shower reflections, and I’d love to read your thoughts about it. What am I missing?
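    The other half of the pattern, defining a module, is worth showing alongside the consumption example above. This is only a generic AMD-style sketch with made-up names, not code from the post:

        // The module body lives inside a function (the closure that provides scope);
        // whatever it returns is what consumers receive when they require it.
        define("greeter", [], function () {
            var greeting = "hello"; // private to the module, nothing leaks into the global scope

            return {
                greet: function (name) {
                    return greeting + ", " + name;
                }
            };
        });

        // The consumer picks its own local name, which is how collisions are avoided.
        require(["greeter"], function (myGreeter) {
            console.log(myGreeter.greet("world"));
        });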

    Read the article

  • .Net Application & Database Modularity/Reuse

    - by Martaver
    I'm looking for some guidance on how to architect an app with regards to modularity, separation of concerns and re-usability. I'm working on an application (ASP.Net, C#) that has distinctly generic chunks of functionality that I'd love to be able to lift out, all layers, into re-usable components. This means the module handles the database schema, data access, API, everything, so that the next time I want to use it I can just register the module and hook into it. Developing modules of re-usable functionality is a no-brainer, but what is really confusing me is what to do when it comes to handling a core re-usable database schema that serves the module's functionality. In an ideal world, I would register a module and it would ensure that the associated database schema exists in the DB. I would code on the assumption that the tables exist, calling the module's functionality through the DLL, agnostic of the database layer. Kind of like Enterprise Library's Caching/Logging Application Block, which can create a DB schema in the target DB to use as a data store. My question is: What do you think is the best way to achieve this, firstly in terms of design architecture, and secondly in terms of solution structure? What patterns/frameworks do you know that exist & support this kind of thing? My thoughts so far: I mostly use Entity Framework and SQL Server DB Projects. I thought about a 'black box' approach to modules of functionality. I could use a code-first approach in EF4, and use the ObjectContext to create a database when the module is initialized. However this means that all of the entities that my module encapsulates would be disconnected from the rest of the application because they belong to an abstracted ObjectContext. Further - creating appropriate indexes and references between domain entities and the module's entities would be impossible to do practically. I've thought of adopting Enterprise Library and creating my own Application Blocks. I'm not sure how this would play nicely with Entity Framework (if at all) though. I like the idea of building on proven patterns & practices to encapsulate established, reusable functionality. I thought of abandoning Entity Framework for the module, and just creating a separate DB schema for the module with its own set of stored procedures & ADO.Net, then deploying the script at run-time if interrogation shows that it doesn't exist. But once again, for development outside of the module, I would want to use Entity Framework, and I would have to use the module separately, disconnected from the domain ObjectContext. Has anyone had experience developing these sorts of full-stack modules? What advice can you offer? Am I biting off more than I can chew?
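    A rough sketch of the "black box" idea using EF code-first (a DbContext rather than a raw ObjectContext); all names are hypothetical and this is only one way the module could own its schema:

        using System.Data.Entity;

        public class LogEntry
        {
            public int Id { get; set; }
            public string Message { get; set; }
        }

        public class LoggingModuleContext : DbContext
        {
            public DbSet<LogEntry> LogEntries { get; set; }

            static LoggingModuleContext()
            {
                // Create the module's tables on first use; never drop existing data.
                Database.SetInitializer(new CreateDatabaseIfNotExists<LoggingModuleContext>());
            }

            protected override void OnModelCreating(DbModelBuilder modelBuilder)
            {
                // Keep the module's tables in their own schema so they stay out of
                // the host application's way.
                modelBuilder.Entity<LogEntry>().ToTable("LogEntries", "LoggingModule");
            }
        }

    This keeps the module self-contained, but it also illustrates the trade-off raised above: entities in this context are disconnected from the application's own context, so cross-context references and indexes still have to be managed by hand.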

    Read the article

  • Getting Terms Associated to a Specific List Item

    - by Gino Abraham
    I had a fancy requirement where I had to get all tags associated to a document set in a document library. The normal tag cloud webpart was not working when I added it to the document set home page, so I planned a custom webpart. I was checking on the net to find a straightforward way to achieve this, but was not lucky enough to get something. Since I didn't get any samples on the net, I looked into Microsoft.SharePoint.Portal.WebControls and found a solution. The SocialDataFrameManager control in 14Hive/Template/layouts/SocialDataFrame.aspx directed me to the solution. You can get the dll from the ISAPI folder. The following code snippet can get all terms associated to a list item, given that you have the list name and the id for the list item. using System; using System.Collections.Generic; using System.Linq; using System.Text; using Microsoft.SharePoint; using Microsoft.Office.Server.SocialData; namespace TagChecker { class Program { static void Main(string[] args) { // Your site url string siteUrl = "http://contoso"; // List Name string listName = "DocumentLibrary1"; // List Item Id for which you want to get all terms int listItemId = 35; using (SPSite site = new SPSite(siteUrl)) { using(SPWeb web = site.OpenWeb()) { SPListItem listItem = web.Lists[listName].GetItemById(listItemId); string url = string.Empty; // Based on the list type the url would be formed. Code sniffed from Microsoft dlls :) if (listItem.ParentList.BaseType == SPBaseType.DocumentLibrary) { url = listItem.Web.Url.TrimEnd(new char[] { '/' }) + "/" + listItem.Url.TrimStart(new char[] { '/' }); } else if (SPFileSystemObjectType.Folder == listItem.FileSystemObjectType) { url = listItem.Web.Url.TrimEnd(new char[] { '/' }) + "/" + listItem.Folder.Url.TrimStart(new char[] { '/' }); } else { url = listItem.Web.Url.TrimEnd(new char[] { '/' }) + "/" + listItem.ParentList.Forms[PAGETYPE.PAGE_DISPLAYFORM].Url.TrimStart(new char[] { '/' }) + "?ID=" + listItem.ID.ToString(); } SPServiceContext serviceContext = SPServiceContext.GetContext(site); Uri uri = new Uri(url); SocialTagManager mgr = new SocialTagManager(serviceContext); SocialTerm[] terms = mgr.GetTerms(uri); foreach (SocialTerm term in terms) { Console.WriteLine(term.Term.Labels[0].Value); } } } Console.Read(); } } } Reference dlls added are Microsoft.SharePoint, Microsoft.SharePoint.Taxonomy, Microsoft.Office.Server, Microsoft.Office.Server.UserProfiles from the ISAPI folder. This logic can be used to make a custom tag cloud webpart by taking code from the OOB tag cloud, so that you can have your webpart anywhere in the site and still get tags added to a specific library/list. Hope this helps someone.

    Read the article

  • How to Send the Contents of the Clipboard to a Text File via the Send to Menu

    - by Jason Faulkner
    We have previously covered how to send the contents of a text file to the Windows Clipboard with a simple Send To shortcut, but what if you want to do the opposite? That is: send the contents of the clipboard to a text file with a simple shortcut. No problem. Here’s how. Copy the ClipOut Utility While Windows offers the command line tool ‘clip’ as a way to direct console output to the clipboard, it does not have a tool to direct the clipboard contents to the console. To do this, we are going to use a small utility named ClipOut (download link at the bottom). Simply download and extract this file to a location in your Windows PATH variable (if you don’t know what this means, just extract the EXE to your C:\Windows folder) and you are ready to go. Add the Send To Shortcut Open your Send To folder location by going to Run > shell:sendto Create a new shortcut with the command: CMD /C ClipOut > Note the above command will overwrite the contents of the selected file. If you would like to append to the contents of the selected file, use this command instead: CMD /C ClipOut >> Of course, you could make shortcuts for both. Give a descriptive name to the shortcut. You’re finished. Using this shortcut will now send the text contents copied to your Windows Clipboard to the selected file. It is important to note that the ClipOut tool only supports outputting text. If you had binary data copied to your clipboard, then the output would be empty. Changing the Icon By default, the icon for the shortcut will appear as a command prompt, but you can easily change this by editing the properties of the shortcut and clicking the Change Icon button. We used an icon located in “%SystemRoot%\System32\shell32.dll”, but any icon of your liking will do. As an additional tweak, you can set the properties of the shortcut to run minimized. This will prevent the command window from “blinking” when the send to command is run (instead it will blink in your taskbar, which is hardly noticeable). Links Download ClipOut Utility     

    Read the article

  • Building a Repository Pattern against an EF 5 EDMX Model - Part 1

    - by Juan
    I am part of a year-plus-long project that is re-writing an existing application for a client.  We have decided to develop the project using Visual Studio 2012 and .NET 4.5.  The project will be using a number of technologies and patterns to include Entity Framework 5, WCF Services, and WPF for the client UI. This is my attempt at documenting some of the successes and failures that I will be coming across in the development of the application. In building the data access layer we have to access a database that has already been designed by a dedicated dba. The dba insists on using Stored Procedures which has made the use of EF a little more difficult.  He will not allow direct table access but we did manage to get him to allow us to use Views.  Since EF 5 does not have good support to do Code First with Stored Procedures, my option was to create a model (EDMX) against the existing database views.   I then had to go select each entity and map the Insert/Update/Delete functions to their respective stored procedure. The next step after I had completed mapping the stored procedures to the entities in the EDMX model was to figure out how to build a generic repository that would work well with Entity Framework 5.  After reading the blog posts below, I adopted much of their code with some changes to allow for the use of Ninject for dependency injection. http://www.tcscblog.com/2012/06/22/entity-framework-generic-repository/ http://www.tugberkugurlu.com/archive/generic-repository-pattern-entity-framework-asp-net-mvc-and-unit-testing-triangle IRepository.cs public interface IRepository<T> : IDisposable where T : class { void Add(T entity); void Update(T entity, int id); T GetById(object key); IQueryable<T> Query(Expression<Func<T, bool>> predicate); IQueryable<T> GetAll(); int SaveChanges(); int SaveChanges(bool validateEntities); } GenericRepository.cs public abstract class GenericRepository<T> : IRepository<T> where T : class { public abstract void Add(T entity); public abstract void Update(T entity, int id); public abstract T GetById(object key); public abstract IQueryable<T> Query(Expression<Func<T, bool>> predicate); public abstract IQueryable<T> GetAll(); public int SaveChanges() { return SaveChanges(true); } public abstract int SaveChanges(bool validateEntities); public abstract void Dispose(); } One of the issues I ran into was trying to do an update. I kept receiving errors so I posted a question on Stack Overflow http://stackoverflow.com/questions/12585664/an-object-with-the-same-key-already-exists-in-the-objectstatemanager-the-object and came up with the following hack. If someone has a better way, please let me know.
    DbContextRepository.cs public class DbContextRepository<T> : GenericRepository<T> where T : class { protected DbContext Context; protected DbSet<T> DbSet; public DbContextRepository(DbContext context) { if (context == null) throw new ArgumentException("context"); Context = context; DbSet = Context.Set<T>(); } public override void Add(T entity) { if (entity == null) throw new ArgumentException("Cannot add a null entity."); DbSet.Add(entity); } public override void Update(T entity, int id) { if (entity == null) throw new ArgumentException("Cannot update a null entity."); var entry = Context.Entry(entity); if (entry.State == EntityState.Detached) { var attachedEntity = DbSet.Find(id); // Need to have access to key if (attachedEntity != null) { var attachedEntry = Context.Entry(attachedEntity); attachedEntry.CurrentValues.SetValues(entity); } else { entry.State = EntityState.Modified; // This should attach entity } } } public override T GetById(object key) { return DbSet.Find(key); } public override IQueryable<T> Query(Expression<Func<T, bool>> predicate) { return DbSet.Where(predicate); } public override IQueryable<T> GetAll() { return Context.Set<T>(); } public override int SaveChanges(bool validateEntities) { Context.Configuration.ValidateOnSaveEnabled = validateEntities; return Context.SaveChanges(); } #region IDisposable implementation public override void Dispose() { if (Context != null) { Context.Dispose(); GC.SuppressFinalize(this); } } #endregion IDisposable implementation } At this point I am able to start creating individual repositories that are needed and add a Unit of Work.  Stay tuned for the next installment in my path to creating a Repository Pattern against EF5.
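    A quick usage sketch of the repository above; the Customer entity and SampleContext stand in for whatever the EDMX model actually generates:

        using System.Data.Entity;

        public class Customer
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public class SampleContext : DbContext
        {
            public DbSet<Customer> Customers { get; set; }
        }

        public class CustomerRepository : DbContextRepository<Customer>
        {
            public CustomerRepository(DbContext context) : base(context) { }
        }

        class Demo
        {
            static void Main()
            {
                using (var repository = new CustomerRepository(new SampleContext()))
                {
                    var customer = repository.GetById(42);
                    if (customer != null)
                    {
                        customer.Name = "Updated name";
                        repository.Update(customer, customer.Id);
                        repository.SaveChanges();
                    }
                }
            }
        }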

    Read the article

  • [EF + Oracle] Inserting Data (Sequences) (2/2)

    - by JTorrecilla
    Prologue In the previous chapter we have seen how to create DB records with EF; now we are going to look at some questions about Oracle.   ORACLE One characteristic of SQL Server that differs from Oracle is “Identity”. For all who have not worked with SQL Server: this property, which applies to integer columns, lets you indicate that a column is auto-increment, so it will be filled automatically, without writing it in the insert statement. In EF with SQL Server, the properties that match Identity columns will be filled after invoking the SaveChanges method. In Oracle DB, there is no Identity property, but there is something similar. Sequences Sequences are DB objects that allow you to create auto-increment values, but they are not related directly to a table. The syntax is as follows: name, min value, max value and begin value. CREATE SEQUENCE nombre_secuencia INCREMENT BY numero_incremento START WITH numero_por_el_que_empezara MAXVALUE valor_maximo | NOMAXVALUE MINVALUE valor_minimo | NOMINVALUE CYCLE | NOCYCLE ORDER | NOORDER How to get the sequence value? To obtain the next value from the sequence: SELECT nb_secuencia.Nextval From Dual Since there is no direct way to indicate that a column is related to a sequence, there are several ways to imitate the behavior: use a trigger (DB), use stored procedures or functions (…), or my particular option. The EF model only imports Table objects, Stored Procedures or Functions, but not sequences. Because of that, I decided to create my own extension method to invoke the next value from a sequence: public static class EFSequence { public static int GetNextValue(this ObjectContext contexto, string SequenceName) { string Connection = ConfigurationManager.ConnectionStrings["JTorrecillaEntities2"].ConnectionString; Connection = Connection.Substring(Connection.IndexOf(@"connection string=") + 19); Connection = Connection.Remove(Connection.Length - 1, 1); using (IDbConnection con = new Oracle.DataAccess.Client.OracleConnection(Connection)) { using (IDbCommand cmd = con.CreateCommand()) { con.Open(); cmd.CommandText = String.Format("Select {0}.nextval from DUAL", SequenceName); return Convert.ToInt32(cmd.ExecuteScalar()); } } } } This ObjectContext extension method is going to invoke a query with the sequence indicated by the parameter. It takes the connection string from the app settings, removing the metadata that was created by VS when generating the EF model, and then returns the next value from the sequence. The next value of a sequence is unique, so concurrent users creating records in the DB using the sequence will not get duplicates. This is my own implementation; I know there could be other and better ways to do it. If I find any other way, I promise to post it. To use the example you need to add a reference to the Oracle (ODP.NET) dll.
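    A short usage sketch of the extension method above, assuming an EDMX-generated ObjectContext and an entity whose key must be fed from a sequence (the context, entity set and sequence names here are hypothetical):

        using (var context = new JTorrecillaEntities2())
        {
            var customer = new CUSTOMER();
            // Ask Oracle for the next sequence value and use it as the primary key.
            customer.ID = context.GetNextValue("SEQ_CUSTOMER");
            customer.NAME = "New customer";
            context.CUSTOMERS.AddObject(customer);
            context.SaveChanges();
        }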

    Read the article

  • MVVM and Animations in Silverlight

    - by Aligned
    I wanted to spin an icon to show progress to my user while some content was downloading. I'm using MVVM (aren't you) and made a satisfactory Storyboard to spin the icon. However, it took longer than expected to trigger that animation from my ViewModel's property.I used a combination of the GoToState action and the DataTrigger from the Microsoft.Expression.Interactions dll as described here.Then I had problems getting it to start until I found this approach that saved the day. The DataTrigger didn't bind right away because "it doesn’t change visual state on load is because the StateTarget property of the GotoStateAction is null at the time the DataTrigger fires.". Here's my XAML, hopefully you can fill in the rest.<Image x:Name="StatusIcon" AutomationProperties.AutomationId="StatusIcon" Width="16" Height="16" Stretch="Fill" Source="inProgress.png" ToolTipService.ToolTip="{Binding StatusTooltip}"> <i:Interaction.Triggers> <utilitiesBehaviors:DataTriggerWhichFiresOnLoad Value="True" Binding="{Binding IsDownloading, Mode=OneWay, TargetNullValue=True}"> <ei:GoToStateAction StateName="Downloading" /> </utilitiesBehaviors:DataTriggerWhichFiresOnLoad> <utilitiesBehaviors:DataTriggerWhichFiresOnLoad Value="False" Binding="{Binding IsDownloading, Mode=OneWay, TargetNullValue=True}"> <ei:GoToStateAction StateName="Complete"/> </utilitiesBehaviors:DataTriggerWhichFiresOnLoad> </i:Interaction.Triggers> <Image.Projection> <PlaneProjection/> </Image.Projection> <VisualStateManager.VisualStateGroups> <VisualStateGroup x:Name="VisualStateGroup"> <VisualStateGroup.Transitions> <VisualTransition GeneratedDuration="0" To="Downloading"> <VisualTransition.GeneratedEasingFunction> <QuadraticEase EasingMode="EaseInOut"/> </VisualTransition.GeneratedEasingFunction> <Storyboard RepeatBehavior="Forever"> <DoubleAnimationUsingKeyFrames Storyboard.TargetProperty="(UIElement.Projection).(PlaneProjection.RotationZ)" Storyboard.TargetName="StatusIcon"> <EasingDoubleKeyFrame KeyTime="0:0:1.5" Value="-360"/> <EasingDoubleKeyFrame KeyTime="0:0:2" Value="-360"/> </DoubleAnimationUsingKeyFrames> </Storyboard> </VisualTransition> <VisualTransition From="Downloading" GeneratedDuration="0"/> </VisualStateGroup.Transitions> <VisualState x:Name="Downloading"/> <VisualState x:Name="Complete"/> </VisualStateGroup> </VisualStateManager.VisualStateGroups></Image>MVVMAnimations.zip
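    The ViewModel side of that binding is just an ordinary observable property; here is a minimal sketch (the class name and surrounding members are made up):

        using System.ComponentModel;

        public class DownloadViewModel : INotifyPropertyChanged
        {
            private bool _isDownloading;

            public bool IsDownloading
            {
                get { return _isDownloading; }
                set
                {
                    if (_isDownloading != value)
                    {
                        _isDownloading = value;
                        OnPropertyChanged("IsDownloading");
                    }
                }
            }

            public event PropertyChangedEventHandler PropertyChanged;

            private void OnPropertyChanged(string propertyName)
            {
                var handler = PropertyChanged;
                if (handler != null)
                {
                    handler(this, new PropertyChangedEventArgs(propertyName));
                }
            }
        }

    Setting IsDownloading to true from the download code is what fires the DataTrigger and moves the icon into the Downloading visual state.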

    Read the article

  • PowerShell Import DnsShell Module

    - by krousemw
    So here's the list of available modules in this directory. As you can see DnsShell is there. PS C:\windows\system32> Get-Module -ListAvailable Directory: C:\windows\system32\WindowsPowerShell\v1.0\Modules ModuleType Name ExportedCommands ---------- ---- ---------------- Manifest ActiveDirectory {Get-ADRootDSE, New-ADObject, Rename- ADObject, Move-ADObject...} Manifest AppLocker {Set-AppLockerPolicy, Get-AppLockerPolicy, Test-AppLockerPolicy, Get-AppLo... Manifest BitsTransfer {Add-BitsFile, Remove-BitsTransfer, Complete-BitsTransfer, Get-BitsTransfe... Manifest CimCmdlets {Get-CimAssociatedInstance, Get-CimClass, Get-CimInstance, Get-CimSession...} Binary DnsShell Script ISE {New-IseSnippet, Import-IseSnippet, Get- IseSnippet} Manifest Microsoft.PowerShell.Diagnostics {Get-WinEvent, Get-Counter, Import-Counter, Export-Counter...} Manifest Microsoft.PowerShell.Host {Start-Transcript, Stop-Transcript} Manifest Microsoft.PowerShell.Management {Add-Content, Clear-Content, Clear- ItemProperty, Join-Path...} Manifest Microsoft.PowerShell.Security {Get-Acl, Set-Acl, Get-PfxCertificate, Get-Credential...} Manifest Microsoft.PowerShell.Utility {Format-List, Format-Custom, Format-Table, Format-Wide...} Manifest Microsoft.WSMan.Management {Disable-WSManCredSSP, Enable- WSManCredSSP, Get-WSManCredSSP, Set-WSManQui... Script PSDiagnostics {Disable-PSTrace, Disable- PSWSManCombinedTrace, Disable-WSManTrace, Enable... Binary PSScheduledJob {New-JobTrigger, Add-JobTrigger, Remove-JobTrigger, Get-JobTrigger...} Manifest PSWorkflow {New-PSWorkflowExecutionOption, New-PSWorkflowSession, nwsn} Manifest PSWorkflowUtility Invoke-AsWorkflow Manifest TroubleshootingPack {Get-TroubleshootingPack, Invoke-TroubleshootingPack} When I run the command to Import-Module DnsShell, I get this error and I dont know why.. PS C:\windows\system32> Import-Module DnsShell Import-Module : Could not load file or assembly 'file:///C:\windows\system32\WindowsPowerShell\v1.0\Modules\DnsShell\DnsShell.dll' or one of its dependencies. Operation is not supported. (Exception from HRESULT: 0x80131515) At line:1 char:1 + Import-Module DnsShell + ~~~~~~~~~~~~~~~~~~~~~~ + CategoryInfo : NotSpecified: (:) [Import-Module], FileLoadException + FullyQualifiedErrorId : System.IO.FileLoadException,Microsoft.PowerShell.Commands.ImportModuleCommand Note: I would have posted pictures but I needed a rep of at least 10 in serverfault

    Read the article

  • Unable to install SQL 2008 on Windows 7

    - by Axel
    SQL 2008 install hangs on Windows 7 The story: Trying to install SQL2008 on Windows 7 hangs on SqlEngineDBStartconfigAction_install_configrc_Cpu32. What I Tried: Uninstall hangs on validation Manual uninstall using msiinv.exe and msiexec /x works Added SQL service accounts to local admins no help Turn of UAC no help Last lines in setup log: 2010-04-01 16:18:05 SQLEngine: : Checking Engine checkpoint 'GetSqlServerProcessHandle' 2010-04-01 16:18:05 SQLEngine: --SqlServerServiceSCM: Waiting for nt event 'Global\sqlserverRecComplete' to be created 2010-04-01 16:18:07 SQLEngine: --SqlServerServiceSCM: Waiting for nt event 'Global\sqlserverRecComplete' or sql process handle to be signaled 2010-04-01 16:18:07 SQLEngine: : Checking Engine checkpoint 'WaitSqlServerStartEvents' 2010-04-01 16:18:53 Slp: Sco: Attempting to initialize script 2010-04-01 16:18:53 Slp: Sco: Attempting to initialize default connection string 2010-04-01 16:18:53 Slp: Sco: Attempting to set script connection protocol to NotSpecified 2010-04-01 16:18:53 Slp: Sco: Attempting to set script connection protocol to NamedPipes 2010-04-01 16:18:53 SQLEngine: --SqlDatabaseServiceConfig: Connection String: Data Source=\\.\pipe\SQLLocal\MSSQLSERVER;Initial Catalog=master;Integrated Security=True;Pooling=False;Network Library=dbnmpntw;Application Name=SqlSetup 2010-04-01 16:18:53 SQLEngine: : Checking Engine checkpoint 'ServiceConfigConnect' 2010-04-01 16:18:53 SQLEngine: --SqlDatabaseServiceConfig: Connecting to SQL.... 2010-04-01 16:18:53 Slp: Sco: Attempting to connect script 2010-04-01 16:18:53 Slp: Connection string: Data Source=\\.\pipe\SQLLocal\MSSQLSERVER;Initial Catalog=master;Integrated Security=True;Pooling=False;Network Library=dbnmpntw;Application Name=SqlSetup And now comes the fun part: When I open conf mgr I can see the service running, I enabled named pipes and TCP/IP, restarted the service I'm able to connect to the server using an OLE DB connection but not with the Native Client. And what I find suspicious is the following error in my app log: .NET Runtime Optimization Service (clr_optimization_v2.0.50727_32) - Failed to compile: C:\Program Files\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\Tools\VDT\DataProjects.dll . Error code = 0x8007000b In MS connect this is reported as a bug but MS is unable to reproduce the problem altough when you search the fora I'm not the only one with this problem. So any help is appreciated.

    Read the article

  • Using unixODBC to connect to Oracle server

    - by Paul
    I am trying to configure our web server (RHEL 5.4 x86) to connect to an Oracle database using unixODBC. I have installed unixODBC-2.2.11-7.1.1, which yum tells me is the latest version. I have also installed the Oracle InstantClient 11.2 and the Oracle InstantClient ODBC library. I have symlinked the all the .so files in /usr/lib/oracle/11.2/client/lib to /usr/lib. I have set $LD_LIBRARY_PATH to /usr/lib/, $ORACLE_HOME to /usr/lib/oracle and $TNS_ADMIN to the directory containing my (valid) Tnsnames.ora file. Here are the contents of my /etc/odbcinst.ini file: [Oracle] Description = Oracle ODBC Connection Driver = /usr/lib/libsqora.so.11.1 Setup = FileUsage = and my /etc/odbc.ini file: [Oracle] Application Attributes = T Attributes = W BatchAutocommitMode = IfAllSuccessful CloseCursor = F DisableDPM = F DisableMTS = T Driver = Oracle EXECSchemaOpt = EXECSyntax = T Failover = T FailoverDelay = 10 FailoverRetryCount = 10 FetchBufferSize = 64000 ForceWCHAR = F Lobs = T Longs = T MetadataIdDefault = F QueryTimeout = T ResultSets = T ServerName = //<host>:<port>/<db> SQLGetData extensions = F Translation DLL = Translation Option = 0 UserID = (ServerName has been edited...host, port, and db are actually there, and correct) When I run isql I get $ isql -v Oracle isql: symbol lookup error: /usr/lib/libsqora.so.11.1: undefined symbol: SQLGetPrivateProfileStringW And running dltest gives me $ dltest Oracle SQLConnect [dltest] ERROR dlopen: Oracle: cannot open shared object file: No such file or directory If anyone has any insights I would be grateful, I've been trying to get this to connect for about 5 hours now... I am going home for the night, but will gladly provide more details, if necessary, tomorrow morning, to anyone willing to help...

    Read the article

  • IIS 7.5 How to disable "Verify File Exists" for siteminder handler

    - by HariM
    We are trying to use ASP.Net MVC with SiteMinder for single sign-on. This is on Windows Server 2008 R2 with IIS 7.5. SiteMinder Agent version 6QMR6. Problem: SiteMinder protects physical files that exist, but it is not protecting the folder when we try to access a file that does not exist. It must redirect to the login page even if the file doesn't exist when the user is accessing a protected folder. How do I configure IIS 7.5 so that it does not verify that a file exists before authentication by SiteMinder? SiteMinderWebAgent is a handler (wildcard script map) we created using the ISAPI6WebAgent.dll. How to protect an ASP.Net MVC request with SiteMinder? (Added this as my previous question did not solve the problem.) The MVC request shows up in the IIS log but not in the SiteMinder log. Update: Microsoft Support says that IIS 7.5, and even earlier versions, doesn't support wildcard mappings on any two ISAPI handlers with a plain * wildcard. Currently, in my case, SiteMinder has a * wildcard and ASP.Net MVC (the handler is aspnet_isapi) has a * wildcard to handle the requests. Ordered priority doesn't work in the wildcard mapping case with just *. I am not convinced by the answer but will wait until tomorrow for them to get back.

    Read the article

  • Remote App Authentication Error (Code:0x507)

    - by CMK
    Hi, I'm trying to get RDP services running with Windows 2008 R2. I'm at a WinXP SP3 client that was modified to run RDP with NLA. When I start the client and connect to the local DC, I get an authentication error (Code 0x507). I've already done the following: • Server setup to run as a standalone local "DC" to provide Terminal Services to a single application. The Remote Desktop Session Host CAL license is running and operational, RD Gateway Manager with local server RAP and CAP is running with NLA and operational, etc..... The server has NLA and temporary use of port 3389 (which is directly connected to and accessible from the internet; I am planning to change the port to 443, but want to get the current system running first). • XP Client(s): The RDP version on the Win XP clients is 6.1. If a client had SP2, I added SP3 and edited the registry settings to allow NLA, by: Click Start, click Run, type regedit, and then press ENTER. In the navigation pane, locate and then click the following registry subkey: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa In the details pane, right-click Security Packages, and then click Modify. In the Value data box, type tspkg. Leave any data that is specific to other SSPs, and then click OK. In the navigation pane, locate and then click the following registry subkey: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders In the details pane, right-click SecurityProviders, and then click Modify. In the Value data box, type credssp.dll. Leave any data that is specific to other SSPs, and then click OK. Exit Registry Editor. Restart the computer.

    Read the article

  • Can't delete C:\Config.Msi\75ce84f.rbf

    - by Hugh Allen
    I can't delete C:\Config.Msi\75ce84f.rbf It's not causing any problems but it's a mystery I'd like to solve, preferably before the next reboot because it's scheduled for deletion then (see pendmoves). it's not readonly, system or hidden it's not in use by another process (according to Process Explorer) the NT security permissions aren't the problem either - I am the owner and have Full Control ; as a double-check, the Effective Permissions tab shows that I have permission to delete. Yet trying to delete the file gives "Access is Denied" from both Explorer and cmd. I can however rename it or move it to another folder on the same drive. I can also read it and Virustotal says it's clean which is what I would expect (it's just a Windows Installer temp file - a copy of some DLL I think). The relevant line from Process Monitor is: 6:52:14.3726983 PM 112 Explorer.EXE SetDispositionInformationFile C:\Config.Msi\75ce84f.rbf CANNOT DELETE Delete: True Write 1232 Background: I'm using XP SP2. I recently repaired my Adobe Reader installation to make it the default browser plugin again instead of Foxit. (there seems to be no UI to do it otherwise?) So the installer did its thing and then asked to reboot. As is my habit when rebooting is inconvenient I declined the offer and ran pendmoves to find out what files the installer had scheduled to move / delete. It wanted to delete two files with .rbf extension (rollback files) located in C:\Config.msi\. (this applies to both even though I've been speaking about one). So I tried to delete them manually and couldn't. Does anyone have any ideas what could be preventing deletion? (and I don't think it's malware even though I'm not running AV at the moment)

    Read the article

  • Debugging UI Problems in IE8 (Was IE8 on Windows 7 Authentication Mess)

    - by alharaka
    UPDATE: I think the real question I need to ask here is: how does a technician debug UI problems with Internet Explorer, as opposed to HTML rendering issues, for which there are pretty good tools? I am aware of the Sysinternals tools and the others mentioned below, but maybe I am not harnessing their power properly. Someone else in the TechNet forum I mentioned had a similar issue. Again, I have lots of data; I am just not sure how to interpret it properly.

    ORIGINAL POST: So I tried the venerable TechNet forums to solve this issue. In short, the Windows Security dialog has no place to put credentials, rendering it pretty much useless. This applies to a whole bunch of our intranet websites, and only a select number of users with a few laptops have this problem. It ends up looking like this. Things I have tried so far:

    • Disabling local Group Policy (not domain connected)
    • Disabling local Security Policy
    • Resetting IE settings
    • A few system restores
    • Re-registering a bunch of IE DLLs and all the other steps here (a scripted sketch of this step follows below)
    • Reinstalling IE8 (dism /online /disable-feature /featurename:"Internet-Explorer-Optional-x86", reboot, dism /online /enable-feature /featurename:"Internet-Explorer-Optional-x86", and reboot)
    • An SFC scan, which found nothing

    Still, nothing. Not only am I fed up, but I have begun to really work with APIExplorer and Procmon, as mentioned in the TechNet original, because I want to know WHAT is happening, not just fix it. Any thoughts?
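    For what it's worth, the DLL re-registration step can be scripted so it is repeatable across the affected laptops. The list below is only a commonly cited set of IE-related libraries, not the exact list from the linked guide, so treat both the names and the approach as an assumption.

    import subprocess

    # Hypothetical example list; substitute the DLLs from the guide actually followed.
    dlls = ["urlmon.dll", "mshtml.dll", "shdocvw.dll", "browseui.dll",
            "jscript.dll", "vbscript.dll", "actxprxy.dll", "oleaut32.dll"]

    for dll in dlls:
        # /s suppresses the confirmation dialog for each registration.
        subprocess.call(["regsvr32", "/s", dll])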

    Read the article

  • Windows 7 Sysprep Default User

    - by Demonwolf
    I seem to be having a problem implementing my sysprep. I have been playing with Windows 7, WAIK, Server 2008 R2 and various other things. I managed to create a WIM with everything I need installed, and I have worked out the autounattend.xml. I now have a complete unattended Windows 7 64-bit install from a USB device. It has all my programs, settings and everything done except one thing - the default profile is not set up 100% correctly.

    I have created a mostly set up default profile. I booted into audit mode, customized the Administrator account (mostly, anyway) and then ran sysprep with an unattend.xml file containing the copyprofile=true setting. The file was set up with WSIM and does not contain any extra info. This all works wonderfully. I recreated the WIM and all was good.

    I then decided to move the default location of the visible folders in the user profile (Documents, Music, Pictures etc.) without changing the location of AppData or other hidden folders. This is where things went a little... wrong. I went to the user folder (the one named after the user) with all the other folders in it, right-clicked My Documents, found the Location tab and changed it to M:\Documents. Now if I run sysprep /generalize /oobe /reboot /unattend:unattend.xml it starts the generalize pass... then spits out a fatal error and goes no further. The setuperr.log contains the following errors:

    2011-08-18 23:21:43, Error [0x0f0043] SYSPRP WinMain:The sysprep dialog box returned FALSE
    2011-08-18 23:31:57, Error [0x0f0082] SYSPRP LaunchDll:Failure occurred while executing 'C:\Windows\System32\slc.dll,SLReArmWindows', returned error code -1073425657
    2011-08-18 23:31:57, Error [0x0f0070] SYSPRP RunExternalDlls:An error occurred while running registry sysprep DLLs, halting sysprep execution. dwRet = -1073425657
    2011-08-18 23:31:57, Error [0x0f00a8] SYSPRP WinMain:Hit failure while processing sysprep generalize internal providers; hr = 0xc004d307

    Does anyone have any ideas how I can redirect My Documents and the other items in a user profile to a second drive in the default profile, so that it affects each person logging in?
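    One possible way to redirect Documents for every new account, without using the Location tab on a template profile, is to edit the default profile's own hive directly: load C:\Users\Default\NTUSER.DAT, change the "Personal" entry under User Shell Folders, and unload the hive. The sketch below is only an illustration of that idea (run elevated; the M:\Documents path is taken from the question); it is not a documented sysprep step, and the SLReArmWindows error in the log may be a separate rearm-count issue rather than a consequence of the folder move.

    import subprocess
    import winreg

    HIVE = r"HKU\DefaultProfile"
    SHELL_FOLDERS = (r"DefaultProfile\Software\Microsoft\Windows"
                     r"\CurrentVersion\Explorer\User Shell Folders")

    # Load the default profile's hive under HKEY_USERS\DefaultProfile.
    subprocess.check_call(["reg", "load", HIVE, r"C:\Users\Default\NTUSER.DAT"])
    try:
        with winreg.CreateKeyEx(winreg.HKEY_USERS, SHELL_FOLDERS, 0,
                                winreg.KEY_SET_VALUE) as key:
            # Redirect Documents for every account created from this profile.
            winreg.SetValueEx(key, "Personal", 0, winreg.REG_EXPAND_SZ,
                              r"M:\Documents")
    finally:
        subprocess.check_call(["reg", "unload", HIVE])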

    Read the article

  • Exchange 2007 restore - Backup Exec Unable to Attach to a resource

    - by Andy
    I have been struggling with this one for months! Grateful for any advice. The setup is a Windows 2003 server network with four servers on the domain, two of them Exchange 2007 servers (only one still has mailboxes on it). Backup Exec (12.5) runs on a non-Exchange server with agents on the others. Backup Exec runs a full backup of Exchange across the network well, at pretty reasonable speeds. However, when you try any kind of restore (individual emails, mailboxes or a whole system restore - to the same location, to an alternate server, to an RSG, etc.), the following message is received within about 10-15 seconds of starting the job:

    Job ended: 24 December 2010 at 13:28:32
    Completed status: Failed
    Final error: 0xe000848c - Unable to attach to a resource. Make sure that all selected resources exist and are online, and then try again. If the server or resource no longer exists, remove it from the selection list. Edit the selection list properties, click the View Selection Details tab, and then remove the resource.
    Final error category: Resource Errors
    For additional information regarding this error refer to link V-79-57344-33932

    Things I have already tried:
    • Changed the account to the main administrator account (with all permissions)
    • Checked the versions of ese.dll on both servers - both the same
    • Checked that all VSS writers on both servers are stable / normal
    • Restoring to different locations

    Any advice anyone could give would be much appreciated. Many thanks, Andy

    Read the article

  • How to do 'search for keyword in files' in emacs in Windows without cygwin?

    - by Anthony Kong
    I want to search for a keyword, say 'action', in a bunch of files on my Windows PC with Emacs. It is partly because I want to learn more advanced features of Emacs. It is also because the Windows PC is locked down by company policy: I cannot install useful applications like Cygwin at will. So I tried this command: M-x rgrep. It throws the following error output:

    -*- mode: grep; default-directory: "c:/Users/me/Desktop/Project" -*-
    Grep started at Wed Oct 16 18:37:43

    find . -type d "(" -path "*/SCCS" -o -path "*/RCS" -o -path "*/CVS" -o -path "*/MCVS" -o -path "*/.svn" -o -path "*/.git" -o -path "*/.hg" -o -path "*/.bzr" -o -path "*/_MTN" -o -path "*/_darcs" -o -path "*/{arch}" ")" -prune -o "(" -name ".#*" -o -name "*.o" -o -name "*~" -o -name "*.bin" -o -name "*.bak" -o -name "*.obj" -o -name "*.map" -o -name "*.ico" -o -name "*.pif" -o -name "*.lnk" -o -name "*.a" -o -name "*.ln" -o -name "*.blg" -o -name "*.bbl" -o -name "*.dll" -o -name "*.drv" -o -name "*.vxd" -o -name "*.386" -o -name "*.elc" -o -name "*.lof" -o -name "*.glo" -o -name "*.idx" -o -name "*.lot" -o -name "*.fmt" -o -name "*.tfm" -o -name "*.class" -o -name "*.fas" -o -name "*.lib" -o -name "*.mem" -o -name "*.x86f" -o -name "*.sparcf" -o -name "*.dfsl" -o -name "*.pfsl" -o -name "*.d64fsl" -o -name "*.p64fsl" -o -name "*.lx64fsl" -o -name "*.lx32fsl" -o -name "*.dx64fsl" -o -name "*.dx32fsl" -o -name "*.fx64fsl" -o -name "*.fx32fsl" -o -name "*.sx64fsl" -o -name "*.sx32fsl" -o -name "*.wx64fsl" -o -name "*.wx32fsl" -o -name "*.fasl" -o -name "*.ufsl" -o -name "*.fsl" -o -name "*.dxl" -o -name "*.lo" -o -name "*.la" -o -name "*.gmo" -o -name "*.mo" -o -name "*.toc" -o -name "*.aux" -o -name "*.cp" -o -name "*.fn" -o -name "*.ky" -o -name "*.pg" -o -name "*.tp" -o -name "*.vr" -o -name "*.cps" -o -name "*.fns" -o -name "*.kys" -o -name "*.pgs" -o -name "*.tps" -o -name "*.vrs" -o -name "*.pyc" -o -name "*.pyo" ")" -prune -o -type f "(" -iname "*.sh" ")" -exec grep -i -n "action" {} NUL ";"
    FIND: Parameter format not correct

    Grep exited abnormally with code 2 at Wed Oct 16 18:37:44

    I believe rgrep tried to spawn a process and called FIND with all those parameters. However, since this is Windows, the default FIND executable simply does not know how to handle them. What is a better way to search for a keyword in multiple files in Emacs on the Windows platform, without any dependency on external programs? Emacs version: 24.2.1

    Read the article

  • Install MatroskaProp on Windows 7 x64

    - by Neophytos
    To see more information about Matroska video (.mkv) files in Windows Explorer property pages and menus, similar to what you get when selecting native Windows media files (.avi, .asf, .wmv or even plain old .mpg), matroska.org (http://www.matroska.org/downloads/windows.html) links to a download of the MatroskaProp shell extension (http://www.jory.info/serendipity/archives/14-MatroskaProp-2.8-Released.html). It used to work for me under 32-bit Windows XP. Now I have Windows 7 x64, and I downloaded, installed and ran it. The configuration and settings page is fine, but it does not seem to actually register any shell extension: nothing is added to Explorer windows, menus or property pages when selecting .mkv or .mks files. I tried calling the registration hook manually using regsvr32; that again invoked the configuration window, let me set all the options, and even reported that the registration succeeded, but it seems to have had no effect. In the registry I cannot find any trace of the shell extension being installed. Can this extension be made to work under Windows 7 or on x64 systems? Are there known problems with installing this or other old shell extensions on x64, or on Windows 7?

    Read the article
