Search Results

Search found 17430 results on 698 pages for 'false positive'.

  • Oracle on Oracle: Is that all?

    - by Darin Pendergraft
    On October 17th, I posted a short blog and a podcast interview with Chirag Andani, talking about how Oracle IT uses its own IDM products. Blog link here. In response, I received a comment from reader Jaime Cardoso ([email protected]) who posted: “- You could have talked about how by deploying Oracle's Open standards base technology you were able to integrate any new system in your infrastructure in days. - You could have talked about how by deploying federation you were enabling the business side to keep all their options open in terms of companies to buy and sell while maintaining perfect employee and customer's single view. - You could have talked about how you are now able to cut response times to your audit and security teams into 1/10th of your former times Instead you spent 6 minutes talking about single sign on and self provisioning? If I didn't knew your IDM offer so well I would now be wondering what its differences from Microsoft's offer was. Sorry for not giving a positive comment here but, please your IDM suite is very good and, you simply aren't promoting it well enough” So I decided to send Jaime a note asking him about his experience, and to get his perspective on what makes the Oracle products great. What I found out is that Jaime is a very experienced IDM Architect with several major projects under his belt. Darin Pendergraft: Can you tell me a bit about your experience? How long have you worked in IT, and what is your IDM experience? Jaime Cardoso: I started working in "serious" IT in 1998 when I became Netscape's technical specialist in Portugal. Netscape Portugal didn't exist so, I was working for their VAR here. Most of my work at the time was with Netscape's mail server and LDAP server. Since that time I've been bouncing between the system's side like Sun resellers, Solaris stuff and even worked with Sun's Engineering in the making of an Hierarchical Storage Product (Sun CIS if you know it) and the application's side, mostly in LDAP and IDM. Over the years I've been doing support, service delivery and pre-sales / architecture design of IDM solutions in most big customers in Portugal, to name a few projects: - The first European deployment of Sun Access Manager (SAPO – Portugal Telecom) - The identity repository of 5/5 of the Biggest Portuguese banks - The Portuguese government federation of services project DP: OK, in your blog response, you mentioned 3 topics: 1. Using Oracle's standards based architecture; (you) were able to integrate any new system in days: can you give an example? What systems, how long did it take, number of apps/users/accounts/roles etc. JC: It's relatively easy to design a user management strategy for a static environment, or if you simply assume that you're an <insert vendor here> shop and all your systems will bow to that vendor's will. We've all seen that path, the use of proprietary technologies in interoperability solutions but, then reality kicks in. As an ISP I recall that I made the technical decision to use Active Directory as a central authentication system for the entire IT infrastructure. Clients, systems, apps, everything was there. As a good part of the systems and apps were running on UNIX, then a connector became needed in order to have UNIX boxes to authenticate against AD. And, that strategy worked but, each new machine required the component to be installed, monitoring had to be made for that component and each new app had to be independently certified. 
A self care user portal was an ongoing project, AD access assumes the client is inside the domain, something the ISP's customers (and UNIX boxes) weren't nor had any intention of ever being. When the Windows 2008 rollout was done, Microsoft changed the Active Directory interface. The Windows administrators didn't have enough know-how about directories and the way systems outside the MS world behaved so, on the go live, things weren't properly tested and a general outage followed. Several hours and 1 roll back later, everything was back working. But, the ISP still had to change all of its applications to work with the new access methods and reset the effort spent on the self service user portal. To keep with the same strategy, they would also have to trust Microsoft not to change interfaces again. Simply by putting up an Oracle LDAP server in the middle and replicating the user info from the AD into LDAP, most of the problems went away. Even systems for which no AD connector existed had PAM in them so, integration was made at the OS level, fully supported by the OS supplier. Sun Identity Manager already had a self care portal, combined with a user workflow so, all the clearances had to be given before the account was created or updated. Adding a new system as a client for these authentication services was simply a new checkbox in the OS installer and, even True64 systems were, for the first time integrated also with a 5 minute work of a junior system admin. True, all the windows clients and MS apps still went to the AD for their authentication needs so, from the start everybody knew that they weren't 100% free of migration pains but, now they had a single point of problems to look at. If you're looking for numbers: - 500K directory entries (users) - 2-300 systems After the initial setup, I personally integrated about 20 systems / apps against LDAP in 1 day while being watched by the different IT teams. The internal IT staff did the rest. DP: 2. Using Federation allows the business to keep options open for buying and selling companies, and yet maintain a single view for both employee and customer. What do you mean by this? Can you give an example? JC: The market is dynamic. The company that's being bought today tomorrow will be sold again. Companies that spread on different markets may see the regulator forcing a sale of part of a company due to monopoly reasons and companies that are in multiple countries have to comply with different legislations. Our job, as IT architects, while addressing the customers and employees authentication services, is quite hard and, quite contrary. On one hand, we need to give access to all of our employees to the relevant systems, apps and resources and, we already have marketing talking with us trying to find out who's a customer of the bough company but not from ours to address. On the other hand, we have to do that and keep in mind we may have to break up all that effort and that different countries legislation may became a problem with a full integration plan. That's a job for user Federation. you don't want to be the one who's telling your President that he will sell that business unit without it's customer's database (making the deal worth a lot less) or that the buyer will take with him a copy of your entire customer's database. Federation enables you to start controlling permissions to users outside of your traditional authentication realm. So what if the people of that company you just bought are keeping their old logins? 
Do you want, because of that, to have a dedicated system for their expenses reports? And do you want to keep their sales (and pre-sales) people out of the loop in terms of your group's path? Control the information flow, establish a Federation trust circle and give access to your apps to users that haven't (yet?) been brought into your internal login systems. You can still see your users in a unified view, you obviously control if a user has access to any particular application, either that user is in your local database or stored in a directory on the other side of the world. DP: 3. Cut response times of audit and security teams to 1/10. Is this a real number? Can you give an example? JC: No, I don't have any backing for this number. One of the companies I did system Administration for has a SOX compliance policy in place (I remind you that I live in Portugal so, this definition of SOX may be somewhat different from what you're used to) and, every time the audit team says they'll do another audit, we have to negotiate with them the size of the sample and we spend about 15 man/days gathering all the required info they ask. I did some work with Sun's Identity auditor and, from what I've been seeing, Oracle's product is even better and, I've seen that most of the information they ask would have been provided in a few hours with the help of this tool. I do stand by what I said here but, to be honest, someone from Identity Auditor team would do a much better job than me explaining this time savings. Jaime is right: the Oracle IDM products have a lot of business value, and Oracle IT is using them for a lot more than I was able to cover in the short podcast that I posted. I want to thank Jaime for his comments and perspective. We want these blog posts to be informative and honest – so if you have feedback for the Oracle IDM team on any topic discussed here, please post your comments below.

  • HTTP Error 500.19 - Internal Server Error

    - by peacmaker
    I am trying to add a website to IIS 7, but when I try to run the website it gives the following error: HTTP Error 500.19 - Internal Server Error Module IIS Web Core Notification BeginRequest Handler Not yet determined Error Code 0x80070021 Config Error This configuration section cannot be used at this path. This happens when the section is locked at a parent level. Locking is either by default (overrideModeDefault="Deny"), or set explicitly by a location tag with overrideMode="Deny" or the legacy allowOverride="false". Config File \\?\E:\Code\web\xSP\xSP Protocol\web.config Requested URL http://localhost:80/ Physical Path E:\Code\web\xSP\xSP Protocol Logon Method Not yet determined Logon User Not yet determined Config Source 136: </modules> 137: <handlers> 138: <remove name="WebServiceHandlerFactory-Integrated"/> Can anyone help?

  • How to shrink a matrix using an array mask in Matlab?

    - by Pyrolistical
    This seems to be a very common problem of mine. data = [1 2 3; 4 5 6]; mask = [true false true]; mask = repmat(mask, 2, 1); data(mask) ==> [1; 4; 3; 6] What I wanted was [1 3; 4 6] Yes I can just reshape it to the right size, but that seems the wrong way to do it. Is there a better way? Why doesn't data(mask) return a matrix when it is actually rectangular? I understand in the general case it may not be, but in my case since my original mask is an array it always will be.

  • How to define an Integer bean in Struts 1.x

    - by ian_scho_es
    Hi. How do you instantiate an Integer bean, assigning a value, in the Struts 1.x framework? <bean:define id="index" type="java.lang.Integer" value="0"/> or <bean:define id="index" type="java.lang.Integer" value="${0}"/> Results in a: java.lang.ClassCastException: java.lang.String <bean:define id="index" type="java.lang.Integer" value="<%=0%>"/> Results in: The method setValue(String) in the type DefineTag is not applicable for the arguments (int) <% java.lang.Integer index = new java.lang.Integer(0); %> Works, but makes my eyes bleed. Note that I had to refactor iterating over a list but am now applying a filter within the iteration. This was the cleanest solution of all! <logic:equal name="aplicacion" property="generico" value="false" indexId="index"> Maybe I need to go about this completely differently. Many thanks.

  • ASP.NET treeview populate child nodes. How can I avoid a postback to server?

    - by mas_oz2k1
    I am trying to test populate on demand for a treeview. I follow the procedure from these links: http://msdn.microsoft.com/en-us/library/e8z5184w.aspx But the treeview still make a postback to the server if I expanded one of the tree nodes (If you put a breakpoint in the first line of Page_load event), thus refreshing the whole page. I am using VS2005 and Asp.net 2.0 (but the same issue occurs in VS2008) My simple test page markup is: <%@ Page Language="C#" AutoEventWireup="true" CodeFile="aspTreeview.aspx.cs" Inherits="aspTreeview" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" > <head runat="server"> <title>Untitled Page</title> </head> <body> <form id="form1" runat="server"> <div> <table> <tr> <td style="height: 80%; width: 45%;"> <asp:Panel ID="Panel1" runat="server" BorderColor="#0033CC" BorderStyle="Solid" ScrollBars="Both"> <asp:TreeView ID="TreeView1" runat="server" ShowLines="True" PopulateNodesFromClient="True" EnableClientScript="True" NodeWrap="True" ontreenodepopulate="TreeView1_TreeNodePopulate" ExpandDepth="0"> </asp:TreeView> </asp:Panel> </td> <td style="width: 10%; height: 80%;" > <div> <asp:Button ID="Button1" runat="server" Text="->" onclick="Button1_Click" /> </div> <div> <asp:Button ID="Button2" runat="server" Text="<-" /> </div> </td> <td style="width: 136px; height: 80%"> <asp:Panel ID="Panel2" runat="server" BorderColor="Lime" BorderStyle="Solid"> <asp:TreeView ID="TreeView2" runat="server" ShowLines="True" ExpandDepth="0"> </asp:TreeView> </asp:Panel> </td> </tr> <tr> <td> </td> <td> </td> <td style="width: 136px"> </td> </tr> </table> </div> </form> </body> </html> The code behind is: protected void Page_Load(object sender, EventArgs e) { Debug.WriteLine("Page_Load started."); if (!IsPostBack) { if (Request.Browser.SupportsCallback) Debug.WriteLine("Browser supports callback scripts."); for (int i = 0; i < 3; i++) { TreeNode node = new TreeNode("ENTRY " + i.ToString()); node.Value = i.ToString(); node.PopulateOnDemand = true; node.Expanded = false; TreeView1.Nodes.Add(node); } } Debug.WriteLine("Page_Load finished."); } protected void TreeView1_TreeNodePopulate(object sender, TreeNodeEventArgs e) { TreeNode targetNode = e.Node; for (int j = 0; j < 4200; j++) { TreeNode subnode = new TreeNode(String.Format("Sub ENTRY {0} {1}", targetNode.Value, j)); subnode.PopulateOnDemand = true; subnode.Expanded = false; targetNode.ChildNodes.Add(subnode); } }
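
    A note on what is actually happening: with PopulateNodesFromClient and EnableClientScript turned on, expanding a node normally triggers an ASP.NET client callback rather than a full postback, and a callback still runs the page lifecycle, so a breakpoint in Page_Load fires even though the page is not re-rendered. If the whole page really does refresh on expand, the control is most likely falling back to postbacks (for example because the browser does not support callbacks). A small diagnostic sketch for the code-behind above, using Page.IsCallback to tell the two cases apart:

        protected void Page_Load(object sender, EventArgs e)
        {
            // During an on-demand node expansion this runs as a client callback:
            // the breakpoint is hit, but no full page refresh takes place.
            System.Diagnostics.Debug.WriteLine(Page.IsCallback
                ? "Page_Load: client callback (node expansion)"
                : "Page_Load: normal request or full postback");
        }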

  • Facebook Connect showPermissionDialog callback fires before user can even see the dialog

    - by Chris Hiester
    I'm doing a Facebook Connect integration for a site and when the user logs in, I need to ask for some permissions so I use FB.Connect.showPermissionDialog. I use its callback to see if permissions were granted. If they are granted, I want to submit the form. Here's what my code looks like: $("#form3").live("submit", function() { FB.Connect.showPermissionDialog('email, offline_access', function(perms) { if (!perms) { location.href="http://www.mysite.com/logout/"; return false; } else { save_session(); } }); }); The problem is that the form submits before the user can even see the permission dialog. Has anyone seen this before?

  • Return SQL Query as Array in Powershell

    - by Emo
    I have a SQL 2008 Ent server with the databases "DBOne", "DBTwo", "DBThree" on the server DEVSQLSRV. Here is my Powershell script: $DBNameList = (Invoke-SQLCmd -query "select Name from sysdatabases" -Server DEVSQLSRV) This produces my desired list of database names as: Name ----- DBOne DBTwo DBThree It has been my assumption that anything that is returned as a list is an Array in Powershell. However, when I then try this in Powershell: $DBNameList -contains 'DBTwo' It comes back as "False" instead of "True", which is leading me to believe that my list is not an actual array. Any idea what I'm missing here? Thanks so much! Emo

  • Adding an item to an existing window

    - by farhad
    Hello! How can I add an item to an existing window? I tried win.add() but it does not seem to work. Why? This is my piece of code: function combo_service(winTitle,desc,input_param) { /* parameters */ param=input_param.split(","); /* of the form param[0]="doc1:text", so it has to be split again */ /* this way I don't create it more than once */ win; if (!win) var win = new Ext.Window({ //title:Ext.get('page-title').dom.innerHTML renderTo:Ext.getBody() ,iconCls:'icon-bulb' ,width:420 ,height:240 ,title:winTitle ,border:false ,layout:'fit' ,items:[{ // form as the only item in window xtype:'form' ,labelWidth:60 ,html:desc ,frame:true ,items:[{ // textfield fieldLabel:desc ,xtype:'textfield' ,anchor:'-18' }] }] }); win.add({ // form as the only item in window xtype:'form' ,labelWidth:60 ,html:desc ,frame:true ,items:[{ // textfield fieldLabel:desc ,xtype:'textfield' ,anchor:'-18' }]}); win.show(); }; What's wrong with my code? Thank you very much.

  • Zend Framework: How to include an OR statement in an SQL fetchAll()

    - by Scoobler
    I am trying to build the following SQL statement: SELECT users_table.*, users_data.first_name, users_data.last_name FROM users_table INNER JOIN users_data ON users_table.id = user_id WHERE (users_table.username LIKE '%sc%') OR (users_data.first_name LIKE '%sc%') OR (users_data.last_name LIKE '%sc%') I have the following code at the moment: public function findAllUsersLike($like) { $select = $this->select(Zend_Db_Table::SELECT_WITH_FROM_PART)->setIntegrityCheck(false); $select->where('users_table.username LIKE ?', '%'.$like.'%'); $select->where('users_data.first_name LIKE ?', '%'.$like.'%'); $select->where('users_data.last_name LIKE ?', '%'.$like.'%'); $select->join('users_data', 'users_table.id = user_id', array('first_name', 'last_name')); return $this->fetchAll($select); } This is close, but not right, as it uses AND to add the extra WHERE conditions instead of OR. Is there any way to do this as one select? Or should I perform 3 selects and combine the results (a lot more overhead?)? P.S. The parameter $like that is passed in is sanitized, so you don't need to worry about user input in the code above!

  • Clean Code Developer & Certification in IT - MSCC 21.09.2013

    It was a very busy weekend this time, and quite some hectic to organise the second meetup on a Saturday for the Mauritius Software Craftsmanship Community (MSCC) but it was absolutely fun. Following, I'm writing a brief summary about the topics we spoke about and the new impulses I got. "What a meetup... I was positively impressed. At the beginning I thought that noone would actually show up but then by the time the room got filled. Lots of conversation, great dialogues and fantastic networking between fresh students, experienced students, experienced employees, and self-employed attendees. That's what community is all about!" Above quote was my first reaction shortly after the gathering. And despite being busy during the weekend and yesterday, I took my time to reflect a little bit on things happened and statements made before writing it here on my blog. Additionally, I was also very curious about possible reactions and blogs from other attendees. Reactions from other craftsmen Let me quickly give you some links and quotes from others first... "Like Jochen posted on facebook, that was indeed a 5+ hours marathon (maybe 4 hours for me but still) … Wohoo! We’re indeed a bunch of crazy geeks who did not realise how time flew as we dived into the myriad discussions that sprouted. Yet in the end everyone was happy (:" -- Ish on MSCC meetup - The marathon (: "And the 4hours spent @ Talking drums bore its fruit..I was doing something I never did before....reading the borrowed book while walking....and though I was not that familiar with things mentionned in the book...I was skimming,scanning & flipping...reading titles...short paragraphs...and I skipped pages till I reached home." -- Yannick on Mauritius Software Craftsmanship 1st Meet-up "Hi Developers, Just wanted to share with you the meetups i attended last Saturday - [...] - The second meetup is the one hosted by Jochen Kirstätter, the MSCC, where the attendees were Craftsman, no woman, this time - all sharing the same passion of being a developer - even though it is on different platforms(Windows - Windows Phone - Linux - Adobe(yes a designer) - .Net) - but we manage to sit at the same table - sharing developer views and experience in the corporate world - also talking about good practice when coding( where Jochen initiated a discussion on Clean Coding ) i could not stay till the end - but from what i have heard - the longer you stay the more fun you have till 1600. Developers in the Facebook grouping i invite you to stay tuned about the various developer communities popping up - where you can come to share and learn good practices, develop the entrepreneurial spirit, and learn and share your passion about technologies" -- Arnaud on Facebook More feedback has been posted on the event directly. So, should I really write more? Wouldn't that spoil the impressions? Starting the day with a surprise Indeed, I was very pleased to stumble over the existence of Mobile Monday Mauritius on LinkedIn, an association about any kind of mobile app development, mobile gadgets and latest smartphones on the market. Despite the Monday in their name they had scheduled their recent meeting on Saturday between 10:00 and 12:00hrs. Wow, what a coincidence! Let's grap the bull by its horns and pay them an introductory visit. As they chose the Ebene Accelerator at the Orange Tower in Ebene it was a no-brainer to leave home a bit earlier and stop by. It was quite an experience and fun to talk to the geeks over there. 
Really looking forward to organise something together.... Arriving at the venue As the children got a bit uneasy at the MoMo gathering and I didn't want to disturb them too much, we arrived early at Bagatelle. Well, no problems as we went for a decent breakfast at Food Lover's Market. Shortly afterwards we went to our venue location, Talking Drums, and prepared the room for the meeting. We only had to take off a repro-painting of the wall in order to have a decent area for the projector. All went very smooth and my two little ones were of great help. Just in time, our first craftsman Avinash arrived on the spot. And then the waiting started... Luckily, not too long. Bit by bit more and more IT people came to join our meeting. Meanwhile, I used the time to give a brief introduction about the MSCC in general, what we are (hm, maybe I am) trying to achieve and that the recent phase is completely focused on creating more awareness that a community like the MSCC is active here in Mauritius. As soon as we reached some 'critical mass' of about ten people I asked everyone for a short introduction and bio, just in case... Conversation between participants started to kick in and we were actually more networking than having a focus on our topics of the day. Quick updates on latest news and development around the MSCC Finally, Clean Code Developer No matter how the position is actually called, whether it is Software Engineer, Software Developer, Programmer, Architect, or Craftsman, anyone working in IT is facing almost the same obstacles. As for the process of writing software applications there are re-occurring patterns and principles combined with some common exercise and best practices on how to resolve them. Initiated by the must-read book 'Clean Code' by Robert C. Martin (aka Uncle Bob) the concept of the Clean Code Developer (CCD) was born already some years ago. CCD is much likely to traditional martial arts where you create awareness of certain principles and learn how to apply practices to improve your style. The CCD initiative recommends to indicate your level of knowledge and experience with coloured wrist bands - equivalent to the belt colours - for various reasons. Frankly speaking, I think that the biggest advantage here is provided by the obvious recognition of conceptual understanding. For example, take the situation of a team meeting... A member with a higher grade in CCD, say Green grade, sees that there are mainly Red grades to talk to, and adjusts her way of communication to their level of understanding. The choice of words might change as certain elements of CCD are not yet familiar to all team members. So instead of talking in an abstract way which only Green grades could follow the whole scenario comes down to Red grade level. Different story, better results... Similar to learning martial arts, we only covered two grades during this occasion - black and red. Most interestingly, there was quite some positive feedback and lots of questions about the principles and practices of the red grade. And we gathered real-world examples from various craftsman and discussed them. Following the Clean Code Developer Red Grade and some annotations from our meetup: CCD Red Grade - Principles Don't Repeat Yourself - DRY Keep It Simple, Stupid (and Short) - KISS Beware of Optimisations! Favour Composition over Inheritance - FCoI Interestingly most of the attendees already heard about those key words but couldn't really classify or categorize them. 
It's very similar to a situation in which you do not the particular for a thing and have to describe it to others... until someone tells you the actual name and suddenly all is very simple. CCD Red Grade - Practices Follow the Boy Scouts Rule Root Cause Analysis - RCA Use a Version Control System Apply Simple Refactoring Pattern Reflect Daily Introduction to the principles and practices of Clean Code Developer - here: Red Grade As for the various ToDo's we commonly agreed that the Boy Scout Rule clearly is not limited to software development or IT administration but applies to daily life in general. Same for the root cause analysis, btw. We really had good stories with surprisingly endings and conclusions. A quick check about who is using a version control system brought more drive into the conversation. Not only that we had people that aren't using any VCS at all, we also had the 'classic' approach of backup folders and naming conventions as well as the VCS 'junkie' that has to use multiple systems at a time. Just for the records: Git and GitHub seem to be in favour of some of the attendees. Regarding the daily reflection at the end of the day we came up with an easy solution: Wrap it up as a blog entry! Certifications in IT This is kind of a controversy in IT in general. Is it interesting to go for certifications or are they completely obsolete? What are the possibilities to get certified? What are the options we have in Mauritius? How would certificates stand compared to other educational tracks like Computer Science or Web Design. The ratio between craftsmen with certifications like MCP, MSTS, CCNA or LPI versus the ones without wasn't in favour for the first group but there was a high interest in the topic itself and some were really surprised to hear that exam preparations are completely free available online including temporarily voucher codes for either discounts or completely free exams. Furthermore, we discussed possible options on forming so-called study groups on a specific certificates and organising more frequent meetups in order to learn together. Taking into consideration that we have sponsored access to the video course material of Pluralsight (and now PeepCode as well as TrainSignal), we might give it a try by the end of the year. Current favourites are LPIC Level 1 and one of the Microsoft exams 40-78x. Feedback and ideas for the MSCC The closing conversations and discussions about how the MSCC is recently doing, what are the possibilities and what's (hopefully) going to happen in the future were really fertile and I made a couple of mental bullet points which I'm looking forward to tackle down together with orher craftsmen. Eventually, it might be a good option to elaborate on some issues during our weekly Code & Coffee sessions one Wednesday morning. Active discussion on various IT topics like certifications (LPI, MCP, CCNA, etc) and sharing experience Finally, we made it till the end of the planned time. Well, actually the talk was still on and we continued even after 16:00hrs. Unfortunately, we (the children and I) had to leave for evening activities. My resume of the day... It was great to have 15 craftsmen in one room. There are hundreds of IT geeks out there in Mauritius, and as Mauritius Software Craftsmanship Community we still have a lot of work to do to pass on the message to some more key players and companies. Currently, it seems that we are able to attract a good number of students in Computer Science... 
but we have a lot more to offer, even or especially for IT people on the job. I'm already looking forward to our next Saturday meetup in the near future. PS: Meetup pictures are courtesy of Nirvan Pagooah. Thanks for sharing...

  • Prevent Windows Explorer from interfering with Directory operations.

    - by Bruno Martinez
    Sometimes, no "foo" directory is left after running this code: string folder = Path.Combine(Path.GetTempPath(), "foo"); if (!Directory.Exists(folder)) Directory.CreateDirectory(folder); Process.Start(@"c:\windows\explorer.exe", folder); Thread.Sleep(TimeSpan.FromSeconds(5)); Directory.Delete(folder, false); Directory.CreateDirectory(folder); It seems Windows Explorer keeps a reference to the folder, so the last CreateDirectory has nothing to do, but then the original folder is deleted. How can I fix the code?

  • bitmap button not displaying in 3D style

    - by Rohit Sasikumar
    Hi, I want to display a bitmap button on my dialog. I am using the code below: CImage image; hr = image.Load(_T("myimage.png")); // just change extension to load jpg bitmap.Attach(image.Detach()); m_button.ModifyStyle(0,BS_BITMAP); m_button.SetBitmap(bitmap); This way the bitmap is correctly displayed on the button, but the button is not displayed in the 3D style that normal buttons have. I have set the owner-drawn property to false, but it still displays like this. Any ideas as to what could be wrong? Thanks, Rohit

  • Problem reading from the StandardOutput of ftp.exe. Possible System.Diagnostics.Process Framework b

    - by SoMoS
    Hello, I was trying some stuff executing console applications when I found this problem handling the I/O of the ftp.exe command that everybody has on their computer. Just try this code: m_process = New Diagnostics.Process() m_process.StartInfo.FileName = "ftp.exe" m_process.StartInfo.CreateNoWindow = True m_process.StartInfo.RedirectStandardInput = True m_process.StartInfo.RedirectStandardOutput = True m_process.StartInfo.UseShellExecute = False m_process.Start() m_process.StandardInput.AutoFlush = True m_process.StandardInput.WriteLine("help") MsgBox(m_process.StandardOutput.ReadLine()) MsgBox(m_process.StandardOutput.ReadLine()) MsgBox(m_process.StandardOutput.ReadLine()) MsgBox(m_process.StandardOutput.ReadLine()) This should show you the text that ftp sends you when you do that from the command line (shown here from a Spanish locale; the first line translates as "Commands may be abbreviated. Commands are:"): Los comandos se pueden abreviar. Comandos: ! delete literal prompt send ? debug ls put status append dir mdelete pwd trace ascii disconnect mdir quit type bell get mget quote user binary glob mkdir recv verbose bye hash mls remotehelp cd help mput rename close lcd open rmdir Instead of that I'm getting the first line and 3 more lines of garbage; after that, the call to ReadLine blocks as if there were no data available. Any hints about that?
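
    Two separate issues seem to be in play here (an interpretation of the symptoms, not a confirmed diagnosis): StandardOutput.ReadLine blocks whenever the child process has not produced a complete line, and ftp.exe is known to behave badly with redirected pipes, so part of its interactive output may simply never reach the redirected stream. Reading asynchronously at least keeps the calling thread from hanging; a sketch in C# using the same Process API (for real automation, a scripted run via "ftp -s:commands.txt" or the managed System.Net.FtpWebRequest class is usually more dependable):

        // using System; using System.Diagnostics;
        var psi = new ProcessStartInfo("ftp.exe")
        {
            CreateNoWindow = true,
            UseShellExecute = false,
            RedirectStandardInput = true,
            RedirectStandardOutput = true,
            RedirectStandardError = true
        };
        var process = Process.Start(psi);

        // Receive output line-by-line via events instead of blocking on ReadLine().
        process.OutputDataReceived += (s, e) => { if (e.Data != null) Console.WriteLine("OUT: " + e.Data); };
        process.ErrorDataReceived += (s, e) => { if (e.Data != null) Console.WriteLine("ERR: " + e.Data); };
        process.BeginOutputReadLine();
        process.BeginErrorReadLine();

        process.StandardInput.WriteLine("help");
        process.StandardInput.WriteLine("bye");
        process.WaitForExit();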

  • How can I use jQuery to match a string inside the current URL of the window I am in?

    - by Jannis
    Hi, I have used the excellent gskinner.com/RegExr/ tool to test my string matching regex but I cannot figure out how to implement this into my jQuery file to return true or false. The code I have is as follows: ^(http:)\/\/(.+\.)?(stackoverflow)\. on a url such as http://stackoverflow.com/questions/ask this would match (according to RegExr) http://stackoverflow. So this is great because I want to try matching the current window.location to that string, but the issue I am having is that this jQuery/js script does not work: var url = window.location; if ( url.match( /^(http:)\/\/(.+\.)?(stackoverflow)\./ ) ) { alert('this works'); }; Any ideas on what I am doing wrong here? Thanks for reading. Jannis

  • StringTemplate .NET dynamic object

    - by Mark Milford
    Hi, I am using StringTemplate to render some content, but the content may be variable, so I'm not sure how to pass it in (using .NET / C#). The basic idea is that I have a List which needs to end up as parameters, e.g. List<KeyValuePair<string, object>> ret = new List<KeyValuePair<string, object>>(); ret.Add(new KeyValuePair<string, object>("elem1", true)); ret.Add(new KeyValuePair<string, object>("elem2", false)); Now I want these to show up in the string template as: $item.elem1$ $item.elem2$ I can get them to be $elem1$ or $elem2$ but I need them inside of a structure. So I in effect need to convince the string template's SetAttribute that I'm passing in an object with properties elem1 and elem2 when in fact I have a List of KeyValuePairs. Thanks
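
    One approach that may help (a sketch; the StringTemplate API names below are from the Antlr C# port, so verify them against the version in use): $item.elem1$ performs a property or key lookup on the attribute named item, and an IDictionary satisfies that lookup, so flattening the list of pairs into a Dictionary<string, object> and setting it as a single attribute gives the template the structure it expects.

        // using System.Collections.Generic;
        // using Antlr.StringTemplate;  // namespace of the C# StringTemplate port

        var item = new Dictionary<string, object>();
        foreach (KeyValuePair<string, object> pair in ret)
            item[pair.Key] = pair.Value;              // "elem1" -> true, "elem2" -> false

        var template = new StringTemplate("$item.elem1$ $item.elem2$");
        template.SetAttribute("item", item);          // keys become $item.<key>$
        string rendered = template.ToString();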

  • Asp.net HttpWebResponse - how can I not depend on WebException for flow control?

    - by Campos
    I need to check whether the request will return a 500 Server Internal Error or not (so getting the error is expected). I'm doing this: HttpWebRequest request = WebRequest.Create(url) as HttpWebRequest; request.Method = "GET"; HttpWebResponse response = request.GetResponse() as HttpWebResponse; if (response.StatusCode == HttpStatusCode.OK) return true; else return false; But when I get the 500 Internal Server Error, a WebException is thrown, and I don't want to depend on it to control the application flow - how can this be done?
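
    For what it is worth, HttpWebRequest itself throws a WebException for every non-success status code, so the 500 response cannot be obtained without the exception being raised somewhere. The usual compromise is to confine the catch to a single helper so the rest of the application only ever deals with a status code; a sketch (the helper name is purely illustrative):

        // using System; using System.Net;
        static HttpStatusCode GetStatusCode(string url)
        {
            var request = (HttpWebRequest)WebRequest.Create(url);
            request.Method = "GET";
            try
            {
                using (var response = (HttpWebResponse)request.GetResponse())
                    return response.StatusCode;           // 2xx/3xx end up here
            }
            catch (WebException ex)
            {
                var errorResponse = ex.Response as HttpWebResponse;
                if (errorResponse == null)
                    throw;                                // DNS failure, timeout, etc.
                using (errorResponse)
                    return errorResponse.StatusCode;      // 4xx/5xx end up here, e.g. 500
            }
        }

        // Usage: bool ok = GetStatusCode(url) == HttpStatusCode.OK;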

  • Viewing HTML inside Applet without using JEditorPane

    - by Tom
    Hello, I have a small (500kb) Swing applet that displays a very simple/limited set of small HTML page(s) inside it with JEditorPane; however, this does not seem to work 100% reliably: some customers get a blank page displayed without any Java exceptions. The page works OK from my machine. I need a more reliable way to show the HTML page to all our users. Any ideas whether there is a small + free class to use instead of JEditorPane, OR is there an easy fix to make it more reliable (non-blank)? private JEditorPane m_editorPane = new JTextPane(); m_editorPane.setEditable( false); m_editorPane.setBackground(new Color(239 ,255, 215)); m_editorPane.setBounds(30,42,520,478 ); m_editorPane.setDoubleBuffered(true); m_editorPane.setBorder(null); m_editorPane.registerEditorKitForContentType("text/html", "com.xxxxx.SynchronousHTMLEditorKit"); m_editorPane.setPage(ResourceLoader.getURLforDataFile(param.trim()));

  • OpenFileDialog.AutoUpgradeEnabled doesn't work under Vista or 7?

    - by Digiku
    If I specify OpenFileDialog.AutoUpgradeEnabled = true, my program still shows the old XP-style dialog. Any idea why this would happen? This is after I enable theming in Main() [STAThread] static void Main() { Application.EnableVisualStyles(); Application.Run(new Primary()); } and this is my dialog code: private void OpenProgramFile() { OpenFileDialog programFileDialog = new OpenFileDialog(); programFileDialog.Filter = "Program files (*.exe;*.lnk)|*.exe|All files (*.*)|*.*"; programFileDialog.FilterIndex = 0; programFileDialog.Title = "Select program file"; programFileDialog.AutoUpgradeEnabled = true; programFileDialog.ShowHelp = true; DialogResult fileResult = programFileDialog.ShowDialog(); if (fileResult != DialogResult.OK) return; programFileDialog.Dispose(); } So why would AutoUpgradeEnabled not work?
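
    A likely culprit worth checking: setting ShowHelp = true is one of the options that forces FileDialog back onto the legacy XP-style template, regardless of AutoUpgradeEnabled. Removing that line (or leaving it at its default of false) should let the Vista/7-style dialog appear; a trimmed sketch of the same method:

        using (var programFileDialog = new OpenFileDialog())
        {
            programFileDialog.Filter = "Program files (*.exe;*.lnk)|*.exe;*.lnk|All files (*.*)|*.*";
            programFileDialog.FilterIndex = 1;             // FilterIndex is 1-based
            programFileDialog.Title = "Select program file";
            programFileDialog.AutoUpgradeEnabled = true;   // true is already the default
            // programFileDialog.ShowHelp = true;          // this forces the old-style dialog, so leave it out

            if (programFileDialog.ShowDialog() == DialogResult.OK)
            {
                // use programFileDialog.FileName here
            }
        }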

  • When should assertions stay in production code?

    - by Carl Seleborg
    Hi all, There's a discussion going on over at comp.lang.c++.moderated about whether or not assertions, which in C++ only exist in debug builds by default, should be kept in production code or not. Obviously, each project is unique, so my question here is not so much whether assertions should be kept, but in which cases this is advisable and in which it is not. By assertion, I mean: A run-time check that tests a condition which, when false, reveals a bug in the software. A mechanism by which the program is halted (maybe after really minimal clean-up work). I'm not necessarily talking about C or C++. My own opinion is that if you're the programmer, but don't own the data (which is the case with most commercial desktop applications), you should keep them on, because a failing assertion shows a bug, and you should not go on with a bug, with the risk of corrupting the user's data. This forces you to test strongly before you ship, and makes bugs more visible, thus easier to spot and fix. What's your opinion/experience? Cheers, Carl See related question here
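
    For readers coming from .NET rather than C/C++, the same debug-only versus always-on distinction exists there and is easy to demonstrate (an illustrative sketch, not a recommendation either way): Debug.Assert calls are compiled away unless DEBUG is defined, Trace.Assert stays as long as TRACE is defined (the Visual Studio default even for release builds), and an explicit check that throws is the fully portable "keep it in production" option.

        // using System; using System.Diagnostics;
        static decimal AverageOrderValue(decimal total, int orderCount)
        {
            // Stripped from builds where DEBUG is not defined.
            Debug.Assert(orderCount > 0, "orderCount must be positive");

            // Kept as long as TRACE is defined, including typical release builds.
            Trace.Assert(orderCount > 0, "orderCount must be positive");

            // Always-on alternative: fail fast with an ordinary exception.
            if (orderCount <= 0)
                throw new ArgumentOutOfRangeException("orderCount");

            return total / orderCount;
        }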

  • Does anyone know what these Oracle AQ JMS XA properties do?

    - by Alan Chan
    I'm using Oracle Advanced Queues via JMS from within Websphere App Server. Does anyone know what effect the following two properties have: - oracle.jms.useEmulatedXA - oracle.jms.useNativeXA I have seen them mentioned in some blogs and quick start guides, usually in sentences along the lines of "Add -Doracle.jms.useEmulatedXA=false -Doracle.jms.useNativeXA=true to the JAVA_PROPERTIES variable", without any explanation as to what they do, e.g. http://biemond.blogspot.com/2008/11/using-aq-in-weblogic-103.html http://sqltech.cl/doc/oas10gR31/integrate.1013/b28994/adptr_aq.htm#CHDEADFB I'm curious as to what these two properties actually do, and what the implications of setting them are, even though they don't seem to have any effect on our app regardless of whether we set them or not. Googling hasn't given any answers, does anyone have any clue what they actually do?

  • quick look at: dm_db_index_physical_stats

    - by fatherjack
    A quick look at the key data from this dmv that can help a DBA keep databases performing well and systems online as the users need them. When the dynamic management views relating to index statistics became available in SQL Server 2005 there was much hype about how they can help a DBA keep their servers running in better health than ever before. This particular view gives an insight into the physical health of the indexes present in a database. Whether they are use or unused, complete or missing some columns is irrelevant, this is simply the physical stats of all indexes; disabled indexes are ignored however. In it’s simplest form this dmv can be executed as:   The results from executing this contain a record for every index in every database but some of the columns will be NULL. The first parameter is there so that you can specify which database you want to gather index details on, rather than scan every database. Simply specifying DB_ID() in place of the first NULL achieves this. In order to avoid the NULLS, or more accurately, in order to choose when to have the NULLS you need to specify a value for the last parameter. It takes one of 4 values – DEFAULT, ‘SAMPLED’, ‘LIMITED’ or ‘DETAILED’. If you execute the dmv with each of these values you can see some interesting details in the times taken to complete each step. DECLARE @Start DATETIME DECLARE @First DATETIME DECLARE @Second DATETIME DECLARE @Third DATETIME DECLARE @Finish DATETIME SET @Start = GETDATE() SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, DEFAULT) AS ddips SET @First = GETDATE() SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ddips SET @Second = GETDATE() SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ddips SET @Third = GETDATE() SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'DETAILED') AS ddips SET @Finish = GETDATE() SELECT DATEDIFF(ms, @Start, @First) AS [DEFAULT] , DATEDIFF(ms, @First, @Second) AS [SAMPLED] , DATEDIFF(ms, @Second, @Third) AS [LIMITED] , DATEDIFF(ms, @Third, @Finish) AS [DETAILED] Running this code will give you 4 result sets; DEFAULT will have 12 columns full of data and then NULLS in the remainder. SAMPLED will have 21 columns full of data. LIMITED will have 12 columns of data and the NULLS in the remainder. DETAILED will have 21 columns full of data. So, from this we can deduce that the DEFAULT value (the same one that is also applied when you query the view using a NULL parameter) is the same as using LIMITED. Viewing the final result set has some details that are worth noting: Running queries against this view takes significantly longer when using the SAMPLED and DETAILED values in the last parameter. The duration of the query is directly related to the size of the database you are working in so be careful running this on big databases unless you have tried it on a test server first. Let’s look at the data we get back with the DEFAULT value first of all and then progress to the extra information later. We know that the first parameter that we supply has to be a database id and for the purposes of this blog we will be providing that value with the DB_ID function. We could just as easily put a fixed value in there or a function such as DB_ID (‘AnyDatabaseName’). The first columns we get back are database_id and object_id. 
These are pretty explanatory and we can wrap those in some code to make things a little easier to read: SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName] , OBJECT_NAME([ddips].[object_id]) AS [TableName] … FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips  gives us   SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName] , OBJECT_NAME([ddips].[object_id]) AS [TableName], [i].[name] AS [IndexName] , ….. FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips INNER JOIN [sys].[indexes] AS i ON [ddips].[index_id] = [i].[index_id] AND [ddips].[object_id] = [i].[object_id]     These handily tie in with the next parameters in the query on the dmv. If you specify an object_id and an index_id in these then you get results limited to either the table or the specific index. Once again we can place a  function in here to make it easier to work with a specific table. eg. SELECT * FROM [sys].[dm_db_index_physical_stats] (DB_ID(), OBJECT_ID(‘AdventureWorks2008.Person.Address’) , 1, NULL, NULL) AS ddips   Note: Despite me showing that functions can be placed directly in the parameters for this dmv, best practice recommends that functions are not used directly in the function as it is possible that they will fail to return a valid object ID. To be certain of not passing invalid values to this function, and therefore setting an automated process off on the wrong path, declare variables for the OBJECT_IDs and once they have been validated, use them in the function: DECLARE @db_id SMALLINT; DECLARE @object_id INT; SET @db_id = DB_ID(N’AdventureWorks_2008′); SET @object_id = OBJECT_ID(N’AdventureWorks_2008.Person.Address’); IF @db_id IS NULL BEGINPRINT N’Invalid database’; ENDELSE IF @object_id IS NULL BEGINPRINT N’Invalid object’; ENDELSE BEGINSELECT * FROM sys.dm_db_index_physical_stats (@db_id, @object_id, NULL, NULL , ‘LIMITED’); END; GO In cases where the results of querying this dmv don’t have any effect on other processes (i.e. simply viewing the results in the SSMS results area)  then it will be noticed when the results are not consistent with the expected results and in the case of this blog this is the method I have used. So, now we can relate the values in these columns to something that we recognise in the database lets see what those other values in the dmv are all about. The next columns are: We’ll skip partition_number, index_type_desc, alloc_unit_type_desc, index_depth and index_level  as this is a quick look at the dmv and they are pretty self explanatory. The final columns revealed by querying this view in the DEFAULT mode are avg_fragmentation_in_percent. This is the amount that the index is logically fragmented. It will show NULL when the dmv is queried in SAMPLED mode. fragment_count. The number of pieces that the index is broken into. It will show NULL when the dmv is queried in SAMPLED mode. avg_fragment_size_in_pages. The average size, in pages, of a single fragment in the leaf level of the IN_ROW_DATA allocation unit. It will show NULL when the dmv is queried in SAMPLED mode. page_count. Total number of index or data pages in use. OK, so what does this give us? Well, there is an obvious correlation between fragment_count, page_count and avg_fragment_size-in_pages. We see that an index that takes up 27 pages and is in 3 fragments has an average fragment size of 9 pages (27/3=9). 
This means that for this index there are 3 separate places on the hard disk that SQL Server needs to locate and access to gather the data when it is requested by a DML query. If this index was bigger than 72KB then having it’s data in 3 pieces might not be too big an issue as each piece would have a significant piece of data to read and the speed of access would not be too poor. If the number of fragments increases then obviously the amount of data in each piece decreases and that means the amount of work for the disks to do in order to retrieve the data to satisfy the query increases and this would start to decrease performance. This information can be useful to keep in mind when considering the value in the avg_fragmentation_in_percent column. This is arrived at by an internal algorithm that gives a value to the logical fragmentation of the index taking into account the multiple files, type of allocation unit and the previously mentioned characteristics if index size (page_count) and fragment_count. Seeing an index with a high avg_fragmentation_in_percent value will be a call to action for a DBA that is investigating performance issues. It is possible that tables will have indexes that suffer from rapid increases in fragmentation as part of normal daily business and that regular defragmentation work will be needed to keep it in good order. In other cases indexes will rarely become fragmented and therefore not need rebuilding from one end of the year to another. Keeping this in mind DBAs need to use an ‘intelligent’ process that assesses key characteristics of an index and decides on the best, if any, defragmentation method to apply should be used. There is a simple example of this in the sample code found in the Books OnLine content for this dmv, in example D. There are also a couple of very popular solutions created by SQL Server MVPs Michelle Ufford and Ola Hallengren which I would wholly recommend that you review for much further detail on how to care for your SQL Server indexes. Right, let’s get back on track then. Querying the dmv with the fifth parameter value as ‘DETAILED’ takes longer because it goes through the index and refreshes all data from every level of the index. As this blog is only a quick look a we are going to skate right past ghost_record_count and version_ghost_record_count and discuss avg_page_space_used_in_percent, record_count, min_record_size_in_bytes, max_record_size_in_bytes and avg_record_size_in_bytes. We can see from the details below that there is a correlation between the columns marked. Column 1 (Page_Count) is the number of 8KB pages used by the index, column 2 is how full each page is (how much of the 8KB has actual data written on it), column 3 is how many records are recorded in the index and column 4 is the average size of each record. This approximates to: ((Col1*8) * 1024*(Col2/100))/Col3 = Col4*. avg_page_space_used_in_percent is an important column to review as this indicates how much of the disk that has been given over to the storage of the index actually has data on it. This value is affected by the value given for the FILL_FACTOR parameter when creating an index. avg_record_size_in_bytes is important as you can use it to get an idea of how many records are in each page and therefore in each fragment, thus reinforcing how important it is to keep fragmentation under control. min_record_size_in_bytes and max_record_size_in_bytes are exactly as their names set them out to be. A detail of the smallest and largest records in the index. 
Purely offered as a guide to the DBA to better understand the storage practices taking place. So, keeping an eye on avg_fragmentation_in_percent will ensure that your indexes are helping data access processes take place as efficiently as possible. Where fragmentation recurs frequently then potentially the DBA should consider; the fill_factor of the index in order to leave space at the leaf level so that new records can be inserted without causing fragmentation so rapidly. the columns used in the index should be analysed to avoid new records needing to be inserted in the middle of the index but rather always be added to the end. * – it’s approximate as there are many factors associated with things like the type of data and other database settings that affect this slightly.  Another great resource for working with SQL Server DMVs is Performance Tuning with SQL Server Dynamic Management Views by Louis Davidson and Tim Ford – a free ebook or paperback from Simple Talk. Disclaimer – Jonathan is a Friend of Red Gate and as such, whenever they are discussed, will have a generally positive disposition towards Red Gate tools. Other tools are often available and you should always try others before you come back and buy the Red Gate ones. All code in this blog is provided “as is” and no guarantee, warranty or accuracy is applicable or inferred, run the code on a test server and be sure to understand it before you run it on a server that means a lot to you or your manager.

  • ASP.NET MVC 2 DropDownList not rendering

    - by Tomaszewski
    Hi, so I don't understand what I am doing wrong here. I want to populate a DropDownList inside the master page of my ASP.NET MVC 2 app. Projects.Master <div id="supaDiv" class="mainNav"> <% Html.DropDownList("navigationList"); %> </div> MasterController.cs namespace ProjectsPageMVC.Controllers.Abstracts { public abstract class MasterController : Controller { public MasterController() { List<SelectListItem> naviList = new List<SelectListItem>(); naviList.Add(new SelectListItem { Selected = true, Text = "AdvanceWeb", Value = "http://4168web/advanceweb/" }); naviList.Add(new SelectListItem { Selected = false, Text = " :: AdvanceWeb Admin", Value = "http://4168web/advanceweb/admin/admindefault.aspx" }); ViewData["navigationList"] = naviList; } } } ProjectsController namespace ProjectsPageMVC.Controllers { public class ProjectsController : MasterController { public ActionResult Index() { return View(); } } } The DropDownList is not even showing up in the DOM and I am at a loss as to what I am doing wrong.
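
    One thing that stands out in the master page markup (worth checking before anything else): <% ... %> executes the expression but discards its return value, and the MVC HTML helpers return the markup rather than writing it to the response, which would explain the list never reaching the DOM. Emitting the helper's result should make it render, assuming ViewData["navigationList"] is populated for the view in question:

        <div id="supaDiv" class="mainNav">
            <%= Html.DropDownList("navigationList") %>
        </div>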

  • files build execution order

    - by Mahesh
    Hi, I have a data structure which is given below: class File { public string Value { get; set; } public File[] Dependencies { get; set; } public bool Change { get; private set; } public File(string value, File[] dependencies) { Value = value; Dependencies = dependencies; Change = false; } } Basically, this data structure models a typical build execution over files. Each File has a value and a list of dependencies, which are again of type File. Every file exposes a property called Change which tells whether the file has changed or not. I have tried to come up with an algorithm that goes through all these files and builds them in order (i.e. a typical build process), but haven't found a good one. Can anyone shed some light on this? Thanks a lot. Mahesh
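
    This is essentially a topological sort over the dependency graph: emit each file only after all of its dependencies have been emitted, and flag a cycle if one turns up. A minimal sketch against the class above (the incremental "only rebuild when Change is set somewhere below" refinement is left out for brevity):

        // using System; using System.Collections.Generic;
        static List<File> BuildOrder(IEnumerable<File> files)
        {
            var order = new List<File>();
            var state = new Dictionary<File, int>();   // absent = unvisited, 1 = in progress, 2 = done
            foreach (var file in files)
                Visit(file, state, order);
            return order;                              // dependencies always precede their dependants
        }

        static void Visit(File file, Dictionary<File, int> state, List<File> order)
        {
            int s;
            if (state.TryGetValue(file, out s))
            {
                if (s == 1)
                    throw new InvalidOperationException("Cyclic dependency at " + file.Value);
                return;                                // already placed in the order
            }

            state[file] = 1;                           // mark as in progress for cycle detection
            if (file.Dependencies != null)
                foreach (var dependency in file.Dependencies)
                    Visit(dependency, state, order);

            state[file] = 2;
            order.Add(file);                           // all of its dependencies are already in the list
        }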

  • IIS not working

    - by 3bd
    I have a web site that was built with Visual Studio 2008, and I need to run it from my computer (Win 7 Ultimate) as a server. I tried to publish it to IIS, but it is simply not working and I get the following error: Error Summary HTTP Error 500.19 - Internal Server Error The requested page cannot be accessed because the related configuration data for the page is invalid. Config Error This configuration section cannot be used at this path. This happens when the section is locked at a parent level. Locking is either by default (overrideModeDefault="Deny"), or set explicitly by a location tag with overrideMode="Deny" or the legacy allowOverride="false". Can anyone help?

  • How can I test to see if a class contains a particular attribute?

    - by BryanWheelock
    How can I test to see if a class contains a particular attribute? In [14]: user = User.objects.get(pk=2) In [18]: user.__dict__ Out[18]: {'date_joined': datetime.datetime(2010, 3, 17, 15, 20, 45), 'email': u'[email protected]', 'first_name': u'', 'id': 2L, 'is_active': 1, 'is_staff': 0, 'is_superuser': 0, 'last_login': datetime.datetime(2010, 3, 17, 16, 15, 35), 'last_name': u'', 'password': u'sha1$44a2055f5', 'username': u'DickCheney'} In [25]: hasattr(user, 'username') Out[25]: True In [26]: hasattr(User, 'username') Out[26]: False I'm having a weird bug where more attributes are showing up than I actually define. I want to conditionally stop this. e.g. if not hasattr(User, 'karma'): User.add_to_class('karma', models.PositiveIntegerField(default=1))
