Search Results

Search found 755 results on 31 pages for 'leeks and leaks'.


  • try finally block around every Object.Create?

    - by max
Hi, I have a general question about best practice in OO Delphi. Currently, I put a try finally block everywhere I create an object, to free that object after usage (to avoid memory leaks). E.g.: aObject := TObject.Create; try aObject.AProcedure(); ... finally aObject.Free; end; instead of: aObject := TObject.Create; aObject.AProcedure(); .. aObject.Free; Do you think it is good practice, or too much overhead? And what about the performance?

    Read the article

  • Delegate Method only Firing after 5 or so Button Presses?

    - by CoDEFRo
I'm having the most bizarre problem which I'm not even close to figuring out. I have a button which fires a delegate method. Once upon a time it was working fine, but after making some changes to my code, the delegate method now only fires after I push the button x amount of times (the changes I made to the code had nothing to do with the infrastructure that connects the delegate together). It varies; it can be 5 times to 10 times. I used the analyzer to check for memory leaks and there aren't any. There is too much code for me to paste here (I don't even know where to start or where the problem could be), but I'm wondering if anyone has experienced this problem before, or knows what could be causing it? This is very odd and I have no clue what the cause might be.

    Read the article

  • printf("... %c ...",'\0') and family - what will happen?

    - by SF.
How will various functions that take a printf format string behave upon encountering the %c format given a value of \0/NULL? How should they behave? Is it safe? Is it defined? Is it compiler-specific? e.g. sprintf() - will it crop the result string at the NULL? What length will it return? Will printf() output the whole format string or just up to the new NULL? Will va_args + vsprintf/vprintf be affected somehow? If so, how? Do I risk memory leaks or other problems if I e.g. shoot this NULL at a point in std::string.c_str()? What are the best ways to avoid this caveat (sanitize input?)

    Read the article

  • Toorcon 15 (2013)

    - by danx
The Toorcon gang (senior staff): h1kari (founder), nfiltr8, and Geo

Talks summarized below:
Introduction to Toorcon 15 (2013)
A Tale of One Software Bypass of MS Windows 8 Secure Boot
Breaching SSL, One Byte at a Time
Running at 99%: Surviving an Application DoS
Security Response in the Age of Mass Customized Attacks
x86 Rewriting: Defeating RoP and other Shinanighans
Clowntown Express: interesting bugs and running a bug bounty program
Active Fingerprinting of Encrypted VPNs
Making Attacks Go Backwards
Mask Your Checksums—The Gorry Details
Adventures with weird machines thirty years after "Reflections on Trusting Trust"

Introduction to Toorcon 15 (2013)
Toorcon 15 is the 15th annual security conference held in San Diego. I've attended about a third of them and blogged about previous conferences I attended here starting in 2003. As always, I've only summarized the talks I attended and that interested me enough to write about them. Be aware that I may have misrepresented the speakers' remarks and that they are not my remarks or opinion, or those of my employer, so don't quote me or them. Those seeking further details may contact the speakers directly or use The Google. For some talks, I have a URL for further information.

A Tale of One Software Bypass of MS Windows 8 Secure Boot
Yuri Bulygin, Oleksandr ("Alex") Bazhaniuk, and (not present) Andrew Furtak
Yuri and Alex talked about UEFI and bootkits and bypassing MS Windows 8 Secure Boot, with vendor recommendations. They previously gave this talk at the BlackHat 2013 conference.

MS Windows 8 Secure Boot overview: UEFI (Unified Extensible Firmware Interface) is the interface between hardware and OS. UEFI is processor and architecture independent. Malware can replace the bootloader (bootx64.efi, bootmgfw.efi); once replaced, it can modify the kernel. It is trivial to replace the bootloader, and today there are many legacy bootkits—UEFI replaces most of them. MS Windows 8 Secure Boot verifies everything you load, either through signatures or hashes. UEFI firmware relies on secure update (with signed updates). You would think Secure Boot would rely on ROM (such as used for phones), but you can't do that for PCs—PCs use writable memory with signatures. The DXE core verifies the UEFI boot loader(s), and the OS Loader (winload.efi, winresume.efi) verifies the OS kernel. A chain of trust is established with a root key (Platform Key, PK), which is a cert belonging to the platform vendor. Key Exchange Keys (KEKs) verify an "authorized" database (db) and a "forbidden" database (dbx). These are X.509 certs with SHA-1/SHA-256 hashes. Keys are stored in non-volatile (NV) flash-based NVRAM. Boot Services (BS) allow adding/deleting keys (they can't be accessed once the OS starts, which uses Run-Time (RT)). The root cert uses RSA-2048 public keys and PKCS#7 format signatures. SecureBoot — enable/disable image signature checks. SetupMode — update keys, self-signed keys, and secure boot variables. CustomMode — allows updating keys. Secure Boot policy settings are: always execute, never execute, allow execute on security violation, defer execute on security violation, deny execute on security violation, query user on security violation.

Attacking MS Windows 8 Secure Boot: Secure Boot does NOT protect from physical access; it can be disabled from the console. Each BIOS vendor implements Secure Boot differently, and there are several platform and BIOS vendors. It becomes a "zoo" of implementations—which can be taken advantage of. Secure Boot is secure only when all vendors implement it correctly.
To implement it correctly, vendors must: allow only UEFI firmware signed updates; protect UEFI firmware from direct modification in flash memory; protect FW update components; program the SPI controller securely; protect secure boot policy settings in NVRAM; protect the runtime API; and disable the compatibility support module, which allows unsigned legacy boot. An attacker can corrupt the Platform Key (PK) EFI root certificate variable in SPI flash. If the PK is not found, the FW enters setup mode with secure boot turned off. The TPM can also be exploited in a similar manner. One is not supposed to be able to directly modify the PK in SPI flash from the OS, though. But they found a bug that they can exploit from User Mode (undisclosed) and demoed the exploit. It loaded and ran their own bootkit. The exploit requires a reboot. Multiple vendors are vulnerable. They will disclose this exploit to vendors in the future. Recommendations: allow only signed updates; protect UEFI fw in ROM; protect the EFI variable store in ROM.

Breaching SSL, One Byte at a Time
Angelo Prado and Yoel Gluck, Salesforce.com
CRIME is software that performs a "compression oracle attack." This is possible because the SSL protocol doesn't hide length, and because SSL compresses the header. CRIME sends requests with every possible character and measures the ciphertext length. Look for the plaintext which compresses the most, and look for the cookie one byte at a time. SSL compression uses LZ77 to reduce redundancy; Huffman coding replaces common byte sequences with shorter codes. US CERT thinks the SSL compression problem is fixed, but it isn't. They convinced CERT that it wasn't fixed and they issued a CVE. BREACH (breachattrack.com) exploits the SSL response body (Accept-Encoding response, Content-Encoding). It works even when the response is not SSL-compressed: BREACH uses gzip and needs fairly "stable" pages that are static for ~30 seconds. It needs attacker-supplied content (say, from a web form or added to a URL parameter). BREACH listens to a session's requests and responses, then inserts extra requests and responses. Eventually, BREACH guesses a session's secret key. It can use compression to guess contents one byte at a time. For example, "Supersecret SupersecreX" (a wrong guess) compresses by 10 bytes, and "Supersecret Supersecret" (a correct guess) compresses by 11 bytes, so it can find each character by guessing every character. To start the guessing, BREACH needs at least three known initial characters in the response sequence. Compression length then "leaks" information. Some roadblocks include no winners (all guesses wrong) or too many winners (multiple possibilities that compress the same). The solutions include: lookahead (guess 2 or 3 characters at a time instead of 1 character), which is expensive; rollback to the last known conflict; check the compression ratio; brute-force the first 3 "bootstrap" characters if needed (expensive). Block ciphers hide the exact plaintext length; the solution is to align the response in advance to the block size. Mitigations: for length, use variable padding; for secrets, use dynamic CSRF tokens per request, change secrets over time, and separate secrets to input-less servlets. Future work: either understand DEFLATE/GZIP better, or HTTPS extensions. (A minimal C# sketch of this compression-length oracle idea appears at the end of this conference summary.)

Running at 99%: Surviving an Application DoS
Ryan Huber, Risk I/O
Ryan first discussed various ways to do a denial of service (DoS) attack against web services. One usual method is to find a slow web page and do several wgets. Or download large files.
Apache is not well suited to handling a large number of connections, but one can put something in front of it, or use Apache alternatives such as nginx. How to identify malicious hosts: short, sudden web requests; an obvious user-agent (curl, python); the same URL requested repeatedly; no web page referer (not normal); hidden links (hide a link and see if a bot gets it); restricted access if not your geo IP (unless the website is global); missing common headers in the request; regular timing; an IP first seen at the beginning of the attack; the count of requests per host (usually a very large number). Use of captcha can mitigate attacks, but you'll lose a lot of genuine users.

Bouncer, goo.gl/c2vyEc and www.github.com/rawdigits/Bouncer
Bouncer is software written by Ryan that works from netflow data. Bouncer has a small, unobtrusive footprint and detects DoS attempts. It closes blacklisted sockets immediately (not nice about it, no proper close connection). An aggregator collects requests and controls your web proxies. You need NTP on the front-end web servers to get clean data for use by Bouncer. Bouncer is also useful for a popularity storm ("Slashdotting") and scraper storms. Future features: gzip collection data, documentation, consumer library, multitask, logging destroyed connections. Takeaways: DoS mitigation is easier with a complete picture; Bouncer is designed to make it easier to detect and defend against DoS, not to be a complete cure.

Security Response in the Age of Mass Customized Attacks
Peleus Uhley and Karthik Raman, Adobe ASSET, blogs.adobe.com/asset/
Peleus and Karthik talked about response to mass-customized exploits. Attackers behave much like a business. "Mass customization" refers to a concept discussed in the book Future Perfect by Stan Davis of Harvard Business School. Mass customization is differentiating a product for an individual customer, but at a mass production price. For example, the same individual with a debit card receives basically the same customized ATM experience around the world. Or designing your own PC from commodity parts. Exploit kits are another example of mass customization. The kits support multiple browsers and plugins, and allow new modules. Exploit kits are cheap and customizable. Organized gangs use exploit kits. A group at Berkeley looked at 77,000 malicious websites (Grier et al., "Manufacturing Compromise: The Emergence of Exploit-as-a-Service", 2012). They found 10,000 distinct binaries among them, but derived from only a dozen or so exploit kits. Characteristics of mass malware: potent, resilient, relatively low cost. Technical characteristics: multiple OS, multiple payloads, multiple scenarios, multiple languages, obfuscation. Response time for 0-day exploits has gone down from ~40 days 5 years ago to about ~10 days now. So the drive with malware is towards mass-customized exploits, to avoid detection. There's plenty of evidence that exploit development has Project Manager bureaucracy. From the malware, they infer edicts to: support all versions of Reader; support all versions of Windows; support all versions of Flash; support all browsers; write large, complex, difficult-to-maintain code (8,750 lines of JavaScript, for example). Exploits have "loose coupling" of multiple versions of software (Adobe), OS, and browser. This allows specific attacks against specific versions of multiple pieces of software. It also allows exploits of more obscure software/OS/browsers and obscure versions. They gave examples of exploits that exploited 2, 3, 6, or 14 separate bugs.
However, these complete exploits are more likely to be buggy or fragile in themselves and easier to defeat. Future research includes normalizing malware and JavaScript. Conclusion: the coming trend is that mass malware with mass zero-day attacks will result in mass customization of attacks.

x86 Rewriting: Defeating RoP and other Shinanighans
Richard Wartell
The attack vector we are addressing here is: first, some malware causes a buffer overflow. The malware has no program access, but it has input access and overflows code onto the stack. Later the stack became non-executable; the workaround malware used was to write a bogus return address to the stack, jumping to the malware. Later came ASLR (Address Space Layout Randomization) to randomize memory layout and make addresses non-deterministic; the workaround malware used was to jump to existing code segments in the program that can be used in bad ways. "RoP" is Return-oriented Programming attacks. RoP attacks use your own code and write return addresses on the stack that point to (existing) exploitable code found in the program ("gadgets"). Pinkie Pie was paid $60K last year for a RoP attack. One solution is using anti-RoP compilers that compile source code with NO return instructions. ASLR does not randomize address space, just "gadgets". IPR/ILR ("Instruction Location Randomization") randomizes each instruction with a virtual machine. Richard's goal was to randomize a binary with no source code access. He created "STIR" (Self-Transforming Instruction Relocation). STIR disassembles the binary and operates on "basic blocks" of code. The STIR disassembler is conservative in what to disassemble. Each basic block is moved to a random location in memory. Next, STIR writes new code sections with copies of "basic blocks" of code in randomized locations. The old code is copied and rewritten with jumps to the new code, and the original code sections in the file are marked non-executable. STIR has better entropy than ASLR in the location of code, which makes brute force attacks much harder. STIR runs on MS Windows (PE) and Linux (ELF). It eliminated 99.96% or more of "gadgets" (i.e., moved the address). Overhead is usually 5-10% on MS Windows, about 1.5-4% on Linux (but some code actually runs faster!). The unique thing about STIR is that it requires no source access and the modified binary fully works! Current work is to rewrite code to enforce security policies. For example, don't create a *.{exe,msi,bat} file. Or don't connect to the network after reading from the disk.

Clowntown Express: interesting bugs and running a bug bounty program
Collin Greene, Facebook
Collin talked about Facebook's bug bounty program. Background at FB: FB has good security frameworks, such as security teams, external audits, and cc'ing on diffs. But there are lots of "deep, dark, forgotten" parts of legacy FB code. Collin gave several examples of bountied bugs. Some bounty submissions were on software purchased from a third party (but bounty claimers don't know and don't care). We use security questions, as does everyone else, but they are basically insecure (often easily discoverable). Collin didn't expect many bugs from the bounty program, but they ended up getting 20+ good bugs in the first 24 hours, and good submissions continue to come in. Bug bounties bring people in with different perspectives, and are paid only for success. A bug bounty is a better use of a fixed amount of time and money versus just code review or static code analysis. The bounty program started July 2011 and has paid out $1.5 million to date.
14% of the submissions have been high-priority problems that needed to be fixed immediately. The best bugs come from a small percentage of submitters (as with everything else); the top paid submitters are paid 6 figures a year. Spammers like to backstab competitors. The youngest submitter was 13. Some submitters have been hired. Bug bounties also allow you to see bugs that were missed by tools or reviews, allowing improvement in the process. Bug bounties might not work for traditional software companies where the product has a release cycle or is not on the Internet.

Active Fingerprinting of Encrypted VPNs
Anna Shubina, Dartmouth Institute for Security, Technology, and Society
(I missed the start of her talk because another track went overtime. But I have the DVD of the talk, so I'll expand later.) IPsec leaves fingerprints. Using netcat, one can easily visually distinguish various crypto chaining modes just from packet timing on a chart (for example, DES-CBC versus AES-CBC). One can tell a lot about VPNs just from ping roundtrips (such as what router is used). Delayed packets are not informative about a network, especially if far away from the network. More exploration is needed of how TCP works in real life with respect to timing.

Making Attacks Go Backwards
FuzzyNop, Mandiant
This talk is not about threat attribution (finding who), product solutions, politics, or sales pitches. But who is making these malware threats? It's not a single person or group—they have diverse skill levels. There are a lot of fat-fingered fumblers out there. Always look for low-hanging fruit first: "hiding" malware in the temp, recycle, or root directories; creation of unnamed scheduled tasks; obvious names of files and syscalls ("ClearEventLog"); uncleared event logs. Clearing the event log in itself, and the time of clearing, is a red flag and a good first clue to look for on a suspect system. Reverse engineering is hard. Disassembler use takes practice and skill. A popular tool is IDA Pro, but it takes multiple interactive iterations to get a clean disassembly. Key loggers are used a lot in targeted attacks. They are typically custom code or built into a backdoor. A big tip-off is that non-printable characters need to be printed out (such as "[Ctrl]" "[RightShift]") or time stamp printf strings. Look for these in files. Presence is not proof they are used; absence is not proof they are not used. Java exploits: one can parse a jar file with idxparser.py and decompile the Java file. Java is typically used to target tech companies. Backdoors are the main persistence mechanism (provided externally) for malware. Also, malware typically needs command and control.

Application of Artificial Intelligence in Ad-Hoc Static Code Analysis
John Ashaman, Security Innovation
Initially John tried to analyze open source files with open source static analysis tools, but these showed thousands of false positives. He also tried using grep, but this fails to find anything even mildly complex. So next John decided to write his own tool. His approach was to first generate a call graph, then analyze the graph. However, the problem is that making a call graph is really hard. For example, one problem is "evil" coding techniques, such as passing function pointers. First the tool generated an Abstract Syntax Tree (AST) with the nodes created from method declarations and edges created from method use. Then the tool generated a control flow graph with the goal of finding a path through the AST (a maze) from source to sink.
The algorithm is to look at adjacent nodes to see if any are "scary" (a vulnerability), using heuristics for search order. The tool, called "Scat" (Static Code Analysis Tool), currently looks for C# vulnerabilities and some simple PHP. Later, he plans to add more PHP, then JSP and Java. For more information see his posts in the Security Innovation blog and NRefactory on GitHub.

Mask Your Checksums—The Gorry Details
Eric (XlogicX) Davisson
Sometimes in emailing or posting TCP/IP packets to analyze problems, you may want to mask the IP address. But to do this correctly, you need to mask the checksum too, or you'll leak information about the IP. Problem reports found on stackoverflow.com, sans.org, and pastebin.org are usually not masked, but a few companies do care. If only the IP is masked, the IP may be guessed from the checksum (that is, it leaks data). Other parts of the packet may leak more data about the IP. TCP and IP checksums both refer to the same data, so you can get more bits of information out of using both checksums than just one checksum. Also, one can usually determine the OS from the TTL field and ports in a packet header. If we get hundreds of possible results (16x for each masked nibble that is unknown), one can do other things to narrow the results, such as look at packet contents for domain or geo information. With hundreds of results, you can import them in CSV format into a spreadsheet, correlate with geo data, and see where each possibility is located. Eric then demoed a real email report with a masked IP packet attached. He was able to find the exact IP address, given the geo and university of the sender. The point is: if you're going to mask a packet, do it right. Eric wouldn't usually bother, but do it correctly if at all, to not create a false impression of security.

Adventures with weird machines thirty years after "Reflections on Trusting Trust"
Sergey Bratus, Dartmouth College (and Julian Bangert and Rebecca Shapiro, not present)
"Reflections on Trusting Trust" refers to Ken Thompson's classic 1984 paper. "You can't trust code that you did not totally create yourself." There are invisible links in the chain of trust, such as "well-installed microcode bugs" or in the compiler, and other planted bugs. Thompson showed how a compiler can introduce and propagate bugs in unmodified source. But suppose there are no bugs and you trust the author; can you trust the code? Hell no! There are too many factors—it's Babylonian in nature. Why not? Well, input is not well-defined/recognized (code's assumptions about "checked" input will be violated, a bug/vulnerability). For example, HTML is recursive, but regex checking is not recursive. Input can be well-formed but so complex that there's no telling what it does; for example, ELF file parsing is complex and has multiple ways of parsing. Input is seen differently by different pieces of a program or toolchain. Any input is a program: input executes on input handlers (it drives state changes and transitions), and only a well-defined execution model can be trusted (regex/DFA, PDA, CFG). An input handler either is a "recognizer" for the inputs as a well-defined language (see langsec.org) or it's a "virtual machine" for inputs to drive into pwn-age. ELF ABI (the UNIX/Linux executable file format) case study: problems can arise from these steps (without planting bugs): compiler, linker, loader, ld.so/rtld, relocator, DWARF (debugger info), exceptions. The problem is you can't really automatically analyze code (it's the "halting problem" and undecidable).
The only solution is to freeze code and sign it. But you can't freeze everything! You can't freeze ASLR or loading—you must have tables and metadata. Any sufficiently complex input data is the same as VM byte code. For example, ELF relocation entries + dynamic symbols == a Turing-complete machine (TM). @bxsays created a Turing machine in Linux from relocation data (not code) in an ELF file. For more information, see Rebecca "bx" Shapiro's presentation from last year's Toorcon, "Programming Weird Machines with ELF Metadata". @bxsays did the same thing with Mach-O bytecode. Or DWARF exception handling data: .eh_frame + glibc == a Turing machine. X86 MMU (IDT, GDT, TSS): they used address translation to create a Turing machine. The page handler reads and writes memory (on page fault). It uses a page table, which can be used as Turing machine byte code. There is an example on GitHub using this TM that will fly a glider across the screen. Next Sergey talked about "parser differentials": having one input format but two parsers will create confusion and opportunity for exploitation. For example, CSRs are parsed during creation by the cert requestor and again by another parser at the CA. Another example is ELF—several parsers in the OS tool chain, which are all different. You can have two different Program Headers (PHDRs) because ld.so parses multiple PHDRs. The second PHDR can completely transform the executable. This is described in a paper in the first issue of the International Journal of PoC. Conclusions: trusting computers is not only about bugs! Bugs are part of the problem, but by no means all of it. Complex data formats mean bugs. There is no "chain of trust" in Babylon! (That is, with parser differentials.) We need to squeeze complexity out of data until data stops being "code equivalent". Further information: see langsec.org, and USENIX WOOT 2013 (Workshop on Offensive Technologies) for "weird machines" papers and videos.
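To make the "Breaching SSL, One Byte at a Time" summary a bit more concrete, here is a minimal C# sketch of the compression-length oracle idea only, not the actual CRIME/BREACH tooling; the page body, secret value, and known prefix are all made up for illustration. It simply appends candidate guesses to known plaintext and compares compressed sizes:

using System;
using System.IO;
using System.IO.Compression;
using System.Text;

class CompressionOracleSketch
{
    // Hypothetical page body containing the secret an attacker wants to recover.
    const string PageBody = "csrf_token=SECRET1234&user=alice&theme=blue";

    // Returns the compressed size of the page body with attacker-supplied text appended.
    static int CompressedLength(string attackerText)
    {
        byte[] data = Encoding.ASCII.GetBytes(PageBody + attackerText);
        using (var ms = new MemoryStream())
        {
            using (var gz = new GZipStream(ms, CompressionMode.Compress))
            {
                gz.Write(data, 0, data.Length);
            }
            return ms.ToArray().Length;
        }
    }

    static void Main()
    {
        // Assume the attacker already knows the prefix "csrf_token=SECRET123".
        string known = "csrf_token=SECRET123";
        string alphabet = "0123456789";

        char best = '?';
        int bestLength = int.MaxValue;
        foreach (char c in alphabet)
        {
            // A correct guess repeats more of the real secret, so it tends to compress better.
            int length = CompressedLength(known + c);
            if (length < bestLength) { bestLength = length; best = c; }
        }
        Console.WriteLine("Best guess for the next secret character: " + best);
    }
}

With inputs this short, gzip may not always show a measurable difference between guesses; the sketch is only meant to show the guess-and-compare loop, where the best-compressing guess is the most likely match for the next byte of the secret.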

    Read the article

  • using ‘using’ and scope. Not try finally!

    - by Edward Boyle
An object that implements IDisposable has, you guessed it, a Dispose() method. In the code you write, you should both declare and instantiate any object that implements IDisposable with the using statement. The using statement allows you to set the scope of an object, and when your code exits that scope, the object will be disposed of. Note that when an exception occurs, this will pull your code out of scope, so it still forces a Dispose(). using (mObject o = new mObject()) { // do stuff } //<- out of Scope, object is disposed. // Note that you can also use multiple objects using // the using statement if of the same type: using (mObject o = new mObject(), o2 = new mObject(), o3 = new mObject()) { // do stuff } //<- out of Scope, objects are disposed. What about try{ }finally{}? It is not needed when you use the using statement. Additionally, using is preferred; Microsoft's own documents put it this way: As a rule, when you use an IDisposable object, you should declare and instantiate it in a using statement. When I started out in .NET I had a very bad habit of not using the using statement. As a result I ran into what many developers do: #region BAD CODE - DO NOT DO try { mObject o = new mObject(); //do stuff } finally { o.Dispose(); // error - o is out of scope, no such object. } // and here is what I find on blogs all over the place as a solution // pox upon them for creating bad habits. mObject o = new mObject(); try { //do stuff } finally { o.Dispose(); } #endregion So when should I use the using statement? Very simple rule: if an object implements IDisposable, use it. This of course does not apply if the object is going to be used as a global object outside of a method. If that is the case, don't forget to dispose of the object in code somewhere. It should be made clear that using the try{}finally{} code block is not going to break your code, nor cause memory leaks. It is perfectly acceptable coding practice, just not best coding practice in C#. This is how VB.NET developers must code, as there is no using equivalent for them to use.
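As a rough companion to the post above, here is a minimal, self-contained C# sketch (the mObject type is the post's hypothetical example class, not a real API) showing the try/finally expansion that a using statement is conceptually equivalent to:

using System;

// Hypothetical disposable type, mirroring the post's example.
class mObject : IDisposable
{
    public void DoStuff() { /* work that might throw */ }
    public void Dispose() { /* release resources here */ }
}

class UsingDemo
{
    static void Main()
    {
        // The using statement scopes the object and disposes it on exit...
        using (mObject o = new mObject())
        {
            o.DoStuff();
        } // o.Dispose() runs here, even if DoStuff() threw.

        // ...which is conceptually equivalent to this expansion:
        mObject o2 = new mObject();
        try
        {
            o2.DoStuff();
        }
        finally
        {
            if (o2 != null) o2.Dispose();
        }
    }
}

Either form guarantees disposal when an exception is thrown; the using form simply keeps the variable correctly scoped and avoids the mistake shown in the "BAD CODE" region.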

    Read the article

  • An Honest look at SharePoint Web Services

    - by juanlarios
INTRODUCTION If you are a SharePoint developer you know that there are two basic ways to develop against SharePoint: 1) the object model, 2) web services. The SharePoint object model has the advantage of being quite rich. Anything you can do through the SharePoint UI as an administrator or end user, you can do through the object model. In fact, everything that is done through the UI is done through the object model behind the scenes. The major disadvantage to getting at SharePoint this way is that the code needs to run on the server. This means that all web parts, event receivers, features, etc… all of this is code that is deployed to the server. The second way to get to SharePoint is through the built-in web services. There are many articles on how to manipulate web services, how to authenticate to them and interact with them. The basic idea is that a remote application or process can contact SharePoint through a web service. Lots has been written about how great these web services are. This article is written to document the limitations and some of the issues and frustrations of working with SharePoint's built-in web services. Ultimately, for the tasks I was given, SharePoint's built-in web services did not suffice. I evaluated the built-in services by comparing them against creating my own WCF services to do what I needed. The current project I'm working on right now involved several "integration points". A remote application, installed on a separate server, was to contact SharePoint and perform a task or operation. So I decided to start up Visual Studio and build a DLL with basically 2 layers of logic: an integration layer and a data layer. A good friend of mine pointed me to the SOLID principles and referred me to some videos and tutorials about them. I decided to implement the methodology (although a lot of the principles are common sense and I had already incorporated them in my coding practices). I was to deliver this dll to the application team and they would simply call the methods exposed by this dll and voila! it would do some task or operation in SharePoint. SOLUTION My integration layer implemented an interface that defined some of the basic integration tasks that I was to put together. My data layer was about the same; it implemented an interface with some of the tasks that I was going to develop. This gave me the opportunity to develop different data layers, ultimately different ways to get at SharePoint if I needed to. This is a classic SOLID principle. In this case it proved to be quite helpful because I wrote one data layer completely implementing SharePoint's built-in web services and another implementing my own WCF service that I wrote. I should mention there is another layer underneath the data layer. In referencing SharePoint or WCF services in my Visual Studio project I created a class for every web service call. So, for example, if I used List.asmx, I created a class called "DocumentRetrieval"; this class would do the grunt work to connect to the correct URL, perform the basic operation of contacting the service, and so on. If I used view.asmx, I implemented a class called "ViewRetrieval" with the same idea as the last class, but it would now interact with all the operations in view.asmx. This gave my data layer the ability to perform multiple calls without really worrying about some of the grunt work each class performs. This again is a classic SOLID principle. So, in order to compare them side by side we can look at both data layers and what is involved in each.
Lets take a look at the "Create Project" task or operation. The integration point is described as , "dll is to provide a way to create a project in SharePoint". Projects , in this case are basically document libraries. I am to implement a way in which a remote application can create a document library in SharePoint. Easy enough right? Use the list.asmx Web service in SharePoint. So here we go! Lets take a look at the code. I added the List.asmx web service reference to my project and this is the class that contacts it:  class DocumentRetrieval     {         private ListsSoapClient _service;      d   private bool _impersonation;         public DocumentRetrieval(bool impersonation, string endpt)         {             _service = new ListsSoapClient();             this.SetEndPoint(string.Format("{0}/{1}", endpt, ConfigurationManager.AppSettings["List"]));             _impersonation = impersonation;             if (_impersonation)             {                 _service.ClientCredentials.Windows.ClientCredential.Password = ConfigurationManager.AppSettings["password"];                 _service.ClientCredentials.Windows.ClientCredential.UserName = ConfigurationManager.AppSettings["username"];                 _service.ClientCredentials.Windows.AllowedImpersonationLevel =                     System.Security.Principal.TokenImpersonationLevel.Impersonation;             }     private void SetEndPoint(string p)          {             _service.Endpoint.Address = new EndpointAddress(p);          }          /// <summary>         /// Creates a document library with specific name and templateID         /// </summary>         /// <param name="listName">New list name</param>         /// <param name="templateID">Template ID</param>         /// <returns></returns>         public XmlElement CreateLibrary(string listName, int templateID, ref ExceptionContract exContract)         {             XmlDocument sample = new XmlDocument();             XmlElement viewCol = sample.CreateElement("Empty");             try             {                 _service.Open();                 viewCol = _service.AddList(listName, "", templateID);             }             catch (Exception ex)             {                 exContract = new ExceptionContract("DocumentRetrieval/CreateLibrary", ex.GetType(), "Connection Error", ex.StackTrace, ExceptionContract.ExceptionCode.error);                             }finally             {                 _service.Close();             }                                      return viewCol;         } } There was a lot more in this class (that I am not including) because i was reusing the grunt work and making other operations with LIst.asmx, For example, updating content types, changing or configuring lists or document libraries. One of the first things I noticed about working with the built in services is that you are really at the mercy of what is available to you. Before creating a document library (Project) I wanted to expose a IsProjectExisting method. This way the integration or data layer could recognize if a library already exists. Well there is no service call or method available to do that check. 
So this is what I wrote: public bool DocLibExists(string listName, ref ExceptionContract exContract) { try { var allLists = _service.GetListCollection(); return allLists.ChildNodes.OfType<XmlElement>().ToList().Exists(x => x.Attributes["Title"].Value == listName); } catch (Exception ex) { exContract = new ExceptionContract("DocumentRetrieval/GetList/GetListWSCall", ex.GetType(), "Unable to Retrieve List Collection", ex.StackTrace, ExceptionContract.ExceptionCode.error); } return false; } This really just gets an XmlElement with all the lists. It was then up to me to sift through the clutter and noise and see if the document library already existed. This took a little bit of getting used to: now, instead of working with code, you are working with the XmlElement response format from the web service. I wrote a LINQ query to go through and find whether the attribute "Title" existed and had a value of the list name; if so it would return true, if not false. I didn't particularly like working this way, dealing with XmlElement responses and then having to manipulate them to get at the exact data I was looking for. Once the DocLibExists check was done, I would either create the document library or send back an error indicating the document library already existed. Now let's examine the code that actually creates the document library. It does what you are really after: it creates a document library. Notice how the template ID is really an integer. Every document library template in SharePoint has an ID associated with it. Document Libraries, Image Library, Custom List, Project Tasks, etc… they all have a unique integer associated with them. Well, that's great, but the client came back to me and gave me some specifics that each "project", or document library, should have. They specified they had 3 types of projects. Each project would have unique views, about 10 views for each project. Each project specified unique configurations (auditing, versioning, content types, etc…). So what started as a simple implementation of creating a document library as a repository for a project turned out to be quite involved. The first thing I thought of was to create a template for the document library. There are other ways you can do this too. Using the web service call, you could configure views, versioning, even content types, etc…; the only catch is, you have to be working quite extensively with CAML. I am not fond of CAML. I can do it and work with it, I just don't like doing it. It is quite touchy and at times it is quite tough to understand where errors were made in CAML statements. Working with web services and CAML proved to be quite annoying. The service call would return a generic error message that did not particularly point me to a CAML statement syntax error, or even a CAML error. I was not sure if it was a security, performance or code-based issue. It was quite tough to work with. At times it was difficult because of the way SharePoint handles metadata: there are "Name", "Display Name", and "StaticName" fields, and it was quite tough to understand at times which one to use. So it took a lot of trial and error. There are tools that can help with CAML generation. There is also now IntelliSense for CAML statements in Visual Studio that might help, but ultimately I'm not fond of CAML with web services. So I decided on the template.
So my plan was to create a document library, configure it accordingly, and then use the Template Builder that comes with the SharePoint SDK. This tool allows you to create site templates, list templates, etc… It is quite interesting because it does not generate an STP file; it actually generates an XML definition and a feature you can activate to make that template available on a site or site collection. The first issue I experienced with this is that one of the specifications for this template was that the "All Documents" view was to have 2 web parts on it. Well, it turns out that using the template builder, it did not include the web parts as part of the list template definition it generated. It backed up the settings, the views, the content types, but not the custom web parts. I still decided to try this even without the web parts on the page. This new template defined a new document library definition with a unique ID. The problem was that the service call accepts an int, but it only has access to the built-in library int definitions. Any new ones added or created will not be available to create. So this made it impossible for me to approach the problem this way. I should also mention that one of the nice features of SharePoint is the ability to create list templates, back them up and then create lists based on that template. It can all be done by end user administrators. These templates are quite unique because they are saved as an STP file and not an XML definition. I also went this route and tried to see if there was another service call where I could create a document library based on a given template name. Nope! None. After some thinking I decided to implement a WCF service to do this creation for me. I was quite certain that the object model would allow me to create document libraries based on a template for which an ID was required, and also on templates saved as STP files. Now, I don't want to bother with posting the code to contact the WCF service because it's self-explanatory, but I will post the code that I used to create a list with a custom template. public ServiceResult CreateProject(string name, string templateName, string projectId) { string siteurl = SPContext.Current.Site.Url; Guid webguid = SPContext.Current.Web.ID; using (SPSite site = new SPSite(siteurl)) { using (SPWeb rootweb = site.RootWeb) { SPListTemplateCollection temps = site.GetCustomListTemplates(rootweb); ProcessWeb(siteurl, webguid, web => Act_CreateProject(web, name, templateName, projectId, temps)); }//SpWeb }//SPSite return _globalResult; } private void Act_CreateProject(SPWeb targetsite, string name, string templateName, string projectId, SPListTemplateCollection temps) { var temp = temps.Cast<SPListTemplate>().FirstOrDefault(x => x.Name.Equals(templateName)); if (temp != null) { try { Guid listGuid = targetsite.Lists.Add(name, "", temp); SPList newList = targetsite.Lists[listGuid]; _globalResult = new ServiceResult(true, "Success", "Success"); } catch (Exception ex) { _globalResult = new ServiceResult(false, (string.IsNullOrEmpty(ex.Message) ? "None" : ex.Message + " " + templateName), ex.StackTrace.ToString()); } } } private void ProcessWeb(string siteurl, Guid webguid, Action<SPWeb> action) { using (SPSite sitecollection = new SPSite(siteurl)) { using (SPWeb web = sitecollection.AllWebs[webguid]) { action(web); } } } This code is actually some of the code I implemented for the service. There was a lot more I did on project creation, which I will cover in my next blog post. I implemented an Action method to process the web. This allowed me to properly dispose of the SPWeb and SPSite objects and not rewrite this code over and over again. So I implemented a WCF service to create projects for me; this allowed me to do a lot more than just create a document library with a template, and it gave me the flexibility to do just about anything the client wanted at project creation. Once this was implemented, the client came back to me and said, "We reference all our projects with IDs in our application; we want SharePoint to do the same." This has been something I have been doing for a little while now, but I do hope that SharePoint 2010 can have more of an answer to this and address it properly. I have been adding metadata to SPWebs through the property bag. I believe I have blogged about it before. This time it required metadata added to a document library. No problem!!! I also mentioned these web parts that were to go on the "All Documents" view. I took the opportunity to configure them to the appropriate settings. There were two settings that needed to be set on these web parts. One of them was a Project ID configured in the web part properties.
The following code enhances and replaces the "Act_CreateProject " method above:  private void Act_CreateProject(SPWeb targetsite, string name, string templateName, string projectId, SPListTemplateCollection temps) {                         var temp = temps.Cast<SPListTemplate>().FirstOrDefault(x => x.Name.Equals(templateName));             if (temp != null)             {                 SPLimitedWebPartManager wpmgr = null;                               try                 {                                         Guid listGuid = targetsite.Lists.Add(name, "", temp);                     SPList newList = targetsite.Lists[listGuid];                     SPFolder rootFolder = newList.RootFolder;                     rootFolder.Properties.Add(KEY, projectId);                     rootFolder.Update();                     if (rootFolder.ParentWeb != targetsite)                         rootFolder.ParentWeb.Dispose();                     if (!templateName.Contains("Natural"))                     {                         SPView alldocumentsview = newList.Views.Cast<SPView>().FirstOrDefault(x => x.Title.Equals(ALLDOCUMENTS));                         SPFile alldocfile = targetsite.GetFile(alldocumentsview.ServerRelativeUrl);                         wpmgr = alldocfile.GetLimitedWebPartManager(PersonalizationScope.Shared);                         ConfigureWebPart(wpmgr, projectId, CUSTOMWPNAME);                                              alldocfile.Update();                     }                                        if (newList.ParentWeb != targetsite)                         newList.ParentWeb.Dispose();                     _globalResult = new ServiceResult(true, "Success", "Success");                 }                 catch (Exception ex)                 {                     _globalResult = new ServiceResult(false, (string.IsNullOrEmpty(ex.Message) ? "None" : ex.Message + " " + templateName), ex.StackTrace.ToString());                 }                 finally                 {                     if (wpmgr != null)                     {                         wpmgr.Web.Dispose();                         wpmgr.Dispose();                     }                 }             }                         }       private void ConfigureWebPart(SPLimitedWebPartManager mgr, string prjId, string webpartname)         {             var wp = mgr.WebParts.Cast<System.Web.UI.WebControls.WebParts.WebPart>().FirstOrDefault(x => x.DisplayTitle.Equals(webpartname));             if (wp != null)             {                           (wp as ListRelationshipWebPart.ListRelationshipWebPart).ProjectID = prjId;                 mgr.SaveChanges(wp);             }         }   This Shows you how I was able to set metadata on the document library. It has to be added to the RootFolder of the document library, Unfortunately, the SPList does not have a Property bag that I can add a key\value pair to. It has to be done on the root folder. Now everything in the integration will reference projects by ID's and will not care about names. My, "DocLibExists" will now need to be changed because a web service is not set up to look at property bags.  I had to write another method on the Service to do the equivalent but with ID's instead of names.  The second thing you will notice about the code is the use of the Webpartmanager. I have seen several examples online, and also read a lot about memory leaks, The above code does not produce memory leaks. The web part manager creates an SPWeb, so just dispose it like I did. 
CONCLUSION This is a long, long post, so I will stop here for now; I will continue with more comparisons and limitations in my next post. My conclusion for this example is that web services will do the trick if you can suffer through CAML and if you are doing some simple operations. For everything else, there's WCF! I apologize for the disorganization of this post; I was on a bus on a 12-hour trip to Iowa while I wrote it, half asleep and half awake. Hopefully it makes enough sense to someone.

    Read the article

  • Stagnating in programming

    - by Coder
Time after time this question came up in my mind, but up until today I wasn't thinking about it much. I have been programming for maybe around 8 years now, and for the last two years it seems I'm not as keen to pick up new technologies anymore. Maybe that's burnout or something, but I'd say it's experience, and what I like, that's stopping me from running after the latest and greatest. I'm a C++ developer; by this I mean, I love close-to-the-metal programming. I have no problem tracing problems through assembly, using tools like WinDbg or HexView. When I use constructs, I think about how they are realized underneath, how the bits are set and unset under the hood. I love battling with complex threading problems and doing everything the hardcore way, even by hand if the regular solutions seem half-baked. But I also love the C++0x stuff, and use it a lot. And all C++ code, as long as it's not cumbersome compared to the C counterparts; sometimes I also fall back to a sort of "Super C" if the C++ way is ugly. And then there are all the other developers who seem to be way more forward-looking: .NET 4.0, MVC, WPF, all those Microsoft X#s, LINQ languages, XML and XSLT, mobile devices and so on. I have done a considerable amount of .NET, SQL, and ASPX programming, but the further I go, the less I want to try those technologies. Is that bad? Almost every day I hear people saying that managed code is the only way forward, WPF is the way to go. I hear that C++ is godawful, and you can't code anything in it that's somewhat stable. But I don't buy it. With the experience I have, and the knowledge of how native code is compiled and executes, I can say I find it extremely rare that C++ code is unstable, or leaks, or causes crashes that take more than 30 seconds to identify and fix. And to tell the truth, I've seen enough problems with other "cool" languages that I'd say C++ is even more stable and production-proof than the safe languages, at least for me. The only thing that scares me in C++ is new frameworks; I don't trust them, and I use them extra sparingly. STL - yes, ATL - very sparingly, everything else... Well, not very keen on it. Most huge problems I've run into were all related to frameworks, not the language itself. Some overridden operator here, a bad hierarchy there, poor class design here, mystical casts there. Other than that, C/C++ (yes, I use them together) still seems a very controlled and stable way to develop applications. Am I stagnating? Should I switch professions, or force myself into all that marketing hype? Are there more developers who feel the same way?

    Read the article

  • ANTS Memory Profiler 7.0

    - by Sam Abraham
In the next few lines, I would like to briefly review ANTS Memory Profiler 7.0. I was honored to be extended the opportunity to review this valuable tool as part of the GeeksWithBlogs Influencers Program, a quarterly award providing its recipients access to valuable tools and the opportunity to provide a brief write-up reviewing the complimentary tools they receive. Typical Usage ANTS Memory Profiler 7.0 is very intuitive and easy to use for any user, be it novice or expert. A simple yet comprehensive menu screen enables the selection of the appropriate program type to profile as well as the executable or site for this program. A typical use case starts with establishing a baseline memory snapshot, which tells us the initial memory cost of the program under normal or low-activity conditions. We would then take a second snapshot after the program has performed an activity which we want to investigate for memory leaks. We can then compare the initial baseline snapshot against the snapshot taken after the program has completed processing the activity in question, to study anomalies in memory that did not get freed up after the program completed its function. The following are some screenshots outlining the selection of the program to profile (an executable for this demonstration's purposes). Figure 1 - Getting Started Figure 2 - Selecting an Application to Profile Features and Options Right after the second snapshot is generated, Memory Profiler gives us immediate access to information on memory fragmentation, size differences between snapshots, unmanaged memory allocation, and statistics on the largest classes taking up un-freed memory space. We also have the option to itemize objects held in memory, grouped by object type, within which we can study the instances allocated of each type. Filtering options enable us to quickly narrow down the object instances we are interested in. Figure 3 - Easily Accessible Execution Memory Information Figure 4 - Class List Figure 5 - Instance List Figure 6 - Retention Graph for a Particular Instance Conclusion I greatly enjoyed the opportunity to evaluate ANTS Memory Profiler 7.0. The tool's intuitive user interface design and easily accessible menu options enabled me to quickly identify problem areas where memory was left unfreed in my code. Tutorials and References Find out more about ANTS Memory Profiler 7.0: http://www.red-gate.com/supportcenter/Product?p=ANTS Memory Profiler Check out what other reviewers of this valuable tool have already shared: http://geekswithblogs.net/BlackRabbitCoder/archive/2011/03/10/ants-memory-profiler-7.0.aspx http://geekswithblogs.net/mikebmcl/archive/2011/02/28/ants-memory-profiler-7.0-review.aspx

    Read the article

  • Dealing with coworkers when developing, need advice [closed]

    - by Yippie-Kai-Yay
I developed our current project architecture and started developing it on my own (reaching something like revision 40). We're developing a simple subway routing framework, and my design seemed to be done extremely well: several main models, corresponding views, and the main logic and data structures were modeled "as they should be" and fully separated from rendering, and the algorithmic part was implemented apart from the main models and had a minor number of intersection points. I would call that design scalable, customizable, easy to implement, interacting mostly based on "black box interaction" and, well, very nice. Now, what was done: I started some implementations of the corresponding interfaces, ported some convenient libraries and wrote implementation stubs for some application parts. I had the document describing coding style and examples of that coding style usage (my own written code). I forced the usage of more or less modern C++ development techniques, including no-delete code (wrapped via smart pointers), etc. I documented the purpose of concrete interface implementations and how they should be used. Unit tests (mostly integration tests, because there wasn't a lot of "actual" code) and a set of mocks for all the core abstractions. I was absent for 12 days. What do we have now (the project was developed by 4 other members of the team): 3 different coding styles all over the project (I guess two of them agreed to use the same style :), and the same applies to the naming of our abstractions (e.g. CommonPathData.h, SubwaySchemeStructures.h), which are basically headers declaring some data structures. Absolute lack of documentation for the recently implemented parts. What I could recently call a single-purpose abstraction now handles at least 2 different types of events, has tight coupling with other parts, and so on. Half of the used interfaces now contain member variables (sic!). Raw pointer usage almost everywhere. Unit tests disabled, because "(Rev.57) They are unnecessary for this project". ... (that's probably not everything). Commit history shows that my design was interpreted as overkill, and people started combining it with personal bicycles and reimplemented wheels and then had problems integrating code chunks. Now the project still does only a small amount of what it has to do, we have severe integration problems, and I assume there are some memory leaks. Is there anything possible to do in this case? I do realize that all my efforts didn't have any benefit, but the deadline is pretty soon and we have to do something. Has anyone had a similar situation? Basically I thought that a good (well, I did everything that I could) start for the project would probably lead to something nice; however, I understand that I'm wrong. Any advice would be appreciated; sorry for my bad English.

    Read the article

  • A brief note for customers running SOA Suite on AIX platforms

    - by christian
When running Oracle SOA Suite with IBM JVMs on the AIX platform, we have seen performance slowdowns and/or memory leaks. On occasion, we have even encountered some OutOfMemoryError conditions and the concomitant Java coredump. If you are experiencing this issue, the resolution may be to configure -Dsun.reflect.inflationThreshold=0 in your JVM startup parameters. https://www.ibm.com/developerworks/java/library/j-nativememory-aix/ contains a detailed discussion of the IBM AIX JVM memory model, but I will summarize my interpretation and understanding of it in the context of SOA Suite, below. Java ClassLoaders on IBM JVMs are allocated a native memory area into which they are anticipated to map such things as jars loaded from the filesystem. This is an excellent memory optimization, as the file can be loaded into memory once and then shared amongst many JVMs on the same host, allowing for excellent horizontal scalability on AIX hosts. However, Java ClassLoaders are not used exclusively for loading files from disk. A performance optimization by the Oracle Java language developers enables reflective accesses to be optimized from a JNI call into Java bytecodes, which are then amenable to hotspot optimizations, amongst other things. This performance optimization is called inflation, and it is executed by generating a sun.reflect.DelegatingClassLoader instance dynamically to inject the Java bytecode into the virtual machine. It is generally considered an excellent optimization. However, it interacts very negatively with the native memory area allocated by the IBM JVM, effectively locking out memory that could otherwise be used by the Java process. SOA Suite and WebLogic are both very large users of reflection. They reflectively use many code paths in their operation, generating lots of DelegatingClassLoaders in normal operation. The IBM JVM slowdown and subsequent OutOfMemoryError are a direct result of the Java memory consumed by the DelegatingClassLoader instances generated by SOA Suite and WebLogic. Java garbage collection runs more frequently to try and keep memory available, until it can no longer do so and throws OutOfMemoryError. The setting sun.reflect.inflationThreshold=0 disables this optimization entirely, never allowing the JVM to generate the optimized reflection code. IBM JVMs are susceptible to this issue primarily because all Java ClassLoaders have this native memory allocation, which is shared with the regular Java heap. Oracle JVMs don't automatically give all ClassLoaders a native memory area, and my understanding is that jar files are never mapped completely from shared memory in the same way as IBM does it. This results in different behaviour characteristics on IBM vs. Oracle JVMs.

    Read the article

  • Is it appropriate to try to control the order of finalization?

    - by Strilanc
    I'm writing a class which is roughly analogous to a CancellationToken, except it has a third state for "never going to be cancelled". At the moment I'm trying to decide what to do if the 'source' of the token is garbage collected without ever being set. It seems that, intuitively, the source should transition the associated token to the 'never cancelled' state when it is about to be collected. However, this could trigger callbacks that were only kept alive by their linkage from the token. That means what those callbacks reference might now be in the process of finalization. Calling them would be bad. In order to "fix" this, I wrote this class:

    public sealed class GCRoot {
        private static readonly GCRoot MainRoot = new GCRoot();
        private GCRoot _next;
        private GCRoot _prev;
        private object _value;
        private GCRoot() {
            this._next = this._prev = this;
        }
        private GCRoot(GCRoot prev, object value) {
            this._value = value;
            this._prev = prev;
            this._next = prev._next;
            _prev._next = this;
            _next._prev = this;
        }
        public static GCRoot Root(object value) {
            return new GCRoot(MainRoot, value);
        }
        public void Unroot() {
            lock (MainRoot) {
                _next._prev = _prev;
                _prev._next = _next;
                this._next = this._prev = this;
            }
        }
    }

    intending to use it like this:

    Source() {
        ...
        _root = GCRoot.Root(callbacks);
    }
    void TransitionToNeverCancelled() {
        _root.Unroot();
        ...
    }
    ~Source() {
        TransitionToNeverCancelled();
    }

    but now I'm troubled. This seems to open the possibility of memory leaks, without actually fixing all cases of sources in limbo. For example, if a source is closed over in one of its own callbacks, then it is rooted by the callback root and so can never be collected. Presumably I should just let my sources be collected without a peep. Or maybe not? Is it ever appropriate to try to control the order of finalization, or is it a giant warning sign?

    Read the article

  • What is the most appropriate testing method in this scenario?

    - by Daniel Bruce
    I'm writing some Objective-C apps (for OS X/iOS) and I'm currently implementing a service to be shared across them. The service is intended to be fairly self-contained. For the current functionality I'm envisioning, there will be only one method that clients call to perform a fairly complicated series of steps, both using private methods on the class and passing data through a bunch of "data mangling classes" to arrive at an end result. The gist of the code is to fetch a log of changes, stored in a service-internal data store, that have occurred since a particular time, simplify the log to only include the last applicable change for each object, attach the serialized values for the affected objects, and return all of this to the client. My question then is, how do I unit-test this entry point method? Obviously, each class would have thorough unit tests to ensure that its functionality works as expected, but the entry point seems harder to "disconnect" from the rest of the world. I would rather not pass in each of these internal classes IoC-style, because they're small and were only made classes to satisfy the single-responsibility principle. I see a couple of possibilities:
    1. Create a "private" interface header for the tests with methods that call the internal classes, and test each of these methods separately. Then, to test the entry point, make a partial mock of the service class with these private methods mocked out and just test that the methods are called with the right arguments.
    2. Write a series of fatter tests for the entry point without mocking out anything, testing the entire functionality in one go. This looks, to me, more like "integration testing" and seems brittle, but it does satisfy the "only test via the public interface" principle.
    3. Write a factory that returns these internal services and take that in the initializer, then write a factory that returns mocked versions of them to use in tests. This has the downside of making construction of the service annoying, and it leaks internal details to the client.
    4. Write a "private" initializer that takes these services as extra parameters, use that to provide mocked services, and have the public initializer delegate to this one. This would ensure that the client code still sees the easy/pretty initializer and no internals are leaked.
    I'm sure there are more ways to solve this problem that I haven't thought of yet, but my question is: what's the most appropriate approach according to unit-testing best practices? Especially considering I would prefer to write this test-first, meaning I should preferably only create these services as the code indicates a need for them.

    Read the article

  • How do I prevent Ubuntu gradually slowing down to a stop?

    - by user29165
    I really do not know what is causing my desktop environment (D.E.) to gradually slow down. It seems to happen in Gnome 3.2, LXDE, XFCE, and KDE. I am running Ubuntu 11.10 and this problem started happening from a fresh install. I have noticed that if I restart Gnome or otherwise re-log-in with any of the D.E.s, the speed resets to normal. I don't seem to have any memory leaks that account for it, and there are not any processes using excessive CPU cycles. Due to the symptoms, I am guessing that there is an underlying package that interacts with all D.E.s that is the source of the problem. Just to clarify, in extreme situations, if I let the slowdown continue without restarting the D.E., my system will finally get to a point where everything, even my clock, freezes. The time over which Gnome slows down (noticeably after 17 hrs) is much shorter than for LXDE or XFCE, but eventually they freeze as well. I didn't mention Unity since I haven't used it enough to verify the problem, but I don't see why it should be different. Just in case this is relevant, I suspend my computer rather than turn it off. I do this because of the greatly reduced load time, and the 17 hrs I stated above is actual up-time, not time where the D.E. is actually active. However, the slowdown does not seem to be affected by how long the computer has been in suspend mode. I know it is possible that this problem is due to an interaction between two or more of the applications that I use, and as such it may not be reproducible by others. In the end I am just wondering (a) are other people experiencing this issue, and (b) does anyone have advice on where to start looking for a solution if one is not already known? I am even open to the idea of switching to the beta release of 12.04 if anyone thinks it will solve the problem. Edit: I took a video of my latest freeze that you can watch at http://youtu.be/flKUqUzCmdE, sorry about the audio. Edit: Since that video, I checked my memory for errors and tried installing the proprietary video drivers. No memory errors were found, and the proprietary video drivers made the display unusable, so I had to uninstall them. Any thoughts?

    Read the article

  • How bad does it look to have left a job soon after starting? [closed]

    - by unitedgremlin
    I have a job I would like to leave. On advice from friends and parents I have stayed. Their primary concern is that it would look bad on my resume if I left only a few months after joining. My concerns with the job are as follows: When I started, it was preferred that I provide and use my own equipment. The company could be out of business in a few months from lack of cash flow. Poor code quality: memory leaks and lack of error handling. The same mistakes continue to be made even though I have raised the issue. It has become evident that co-workers do not understand the memory management rules of the platform and are not interested in learning them. Yet there is still surprise from them when strange bugs continue to crop up. As a result, I don't feel I will learn from my co-workers. Plus, fixing the lingering bugs while trying to keep up with new feature development is like a game of whack-a-mole that never ends. I don't believe in the company's vision or its ability to execute on the ideas anymore. My ideas and suggestions for very small tweaks are quickly dismissed, and more than half of them have since come back as bugs that we end up needing to address. I have been told to wait on fixing bugs in code until we can talk to the original author. I don't feel I am allowed to take the initiative to just fix/change things and do what I think is best. Everything needs consensus, even for a bug fix, before any work is done. I am adopting a shut-up-and-do-what-I'm-told approach to save myself from ulcers. Lots of meetings (I am personally not involved in all of them, which is good), but the sheer amount reminds me of days at a big corporation. Why is everyone around me always meeting? It's a small company; I can count everyone on my fingers and toes. I can say with certainty I have no interest in working with any of them again. This is the first time I have truly worked with a group of so-called "B and C players". Ultimately, I think it is my fault for not doing a better job of evaluating the team and company before joining. But I have come up with a better set of questions to ask when probing companies in the future. My questions are: How bad does it look to have left a job soon (a few months) after starting? What would be the best way to explain my concerns and reasons for leaving without badmouthing the company? Should I stick it out until what I believe will be the imminent end of the company?

    Read the article

  • WmiPrvSE memory leak on Windows 2008 *R2*

    - by MichaelGG
    I've seen references on Windows 2008 to WmiPrvSE leaks, but nothing about Windows 2008 R2. We're running R2 on top of Hyper-V (2008). We are also running NSClient++ for monitoring from opsview. Over time, WmiPrvSE.exe starts to use a lot of memory, causing memory alert issues (less than 10% free). VM has 2GB, WmiPrvSE consumes up to 500-600MB before I kill it. Killing the process doesn't seem to have any negative effect; it starts up again and I haven't noticed any problems. But after a day or two, it's back in the same situation. Any ideas on what to do? Resource Monitor doesn't show any Disk or Network IO by WmiPrvSE.exe. Just slowly climbing private memory... Edited to add: We aren't running clustering, or Windows System Resource Manager. The only regular WMI user I can guess is NSClient++, but we don't seem to have this problem on other servers.

    Read the article

  • Virtual Fileserver

    - by Sergei
    Hi, we are planning to move our production servers to the datacenter and virtualize the remaining servers in the process. The datacenter will have HP blades with vSphere on top. Currently we are using a Celerra NS20 as our fileserver. Since the datacenter is using HP kit and an EVA 4400 as the SAN, we cannot have Celerra there, as EMC support for Celerra does not cover non-EMC arrays. I have searched for possible options, and one of them was to have an HP NAS blade X3800sb instead of the Celerra. However, this seems like overkill to me. We are only using the Celerra for about 100 users and 50 servers, and I think the X3800sb would be a waste of resources. The other option would be to have a virtual fileserver as part of the VMware environment in the datacenter. We only need CIFS to be provided. The only option I can think of is Windows Storage Server. We had a bad experience with Windows servers used as fileservers in the past (memory leaks, for one thing), and this was one of the reasons we moved to Celerra. What are the other options? We need something as reliable as Celerra with as many options as possible. For example, Celerra has per-folder quotas, deduplication, dynamic volume allocation, automatic failover, VTLU, and replication. Also, we would need to replicate the NAS data to the failover site. We could use block-level, SAN-to-SAN replication, but this would mean wasted bandwidth, as we only need a subset of folders to be replicated. We used CA XSoft for Windows servers in the past, and Celerra has its own replication option. Thank you very much in advance, and please ask me if I missed any details!

    Read the article

  • Should I keep my ex-employer's data?

    - by Jurily
    Following my brief reign as System Monkey, I am now faced with a dilemma: I did successfully create a backup and a test VM, both on my laptop, as no computer at work had enough free disk space. I haven't deleted the backup yet, as it's still the only one of its kind in the company's history. The original is running on a hard drive in continuous use since 2006. There is now only one person left at the company who knows what a backup is, and they're unlikely to hire someone else, for reasons very closely related to my departure. The last time I tried to talk to them about the importance of backups, they thought I was threatening them. Should I keep it?
    Pros:
    - I get to save people from their own stupidity (the unofficial sysadmin motto, as far as I know)
    - I get to say "I told you so" when they come begging for help, and feel good about it
    - I get to say nice things about myself on my next job interview
    - Nice clean conscience
    - Bonus rep with the appropriate deities
    Cons:
    - Legal problems: even if I do help them out with it, they might just sue me for keeping it anyway, although given the circumstances I think I have a good case
    - Legal problems: given the nature of the job and their security, if something leaks, I'm a likely target for retaliation
    - Legal problems: whatever else I didn't think about
    - I need more space for porn.
    - Legal problems.
    What would you do?

    Read the article

  • What Logs / Process Stats to monitor on a Ubuntu FTP server?

    - by Adam Salkin
    I am administering a server with Ubuntu Server which is running pureFTP. So far all is well, but I would like to know what I should be monitoring so that I can spot any potential stability and security issues. I'm not looking for sophisticated software, more an idea of which logs and process statistics are most useful for checking on the health of the system. I'm thinking that I can look at various parameters output from the "ps" command and compare over time to see if I have things like memory leaks. But I would like to know what experienced admins do. Also, how do I do a disk check so that when I reboot, I don't get a message saying something like "disk not checked for x days, forcing check", which delays the reboot? I assume there is a command that I can run as a cron job late at night. How often should it be run? What things should I be looking at to spot intrusion attempts? The only shell access is SSH on a non-standard port through the UFW firewall, and I regularly do a grep on auth.log for "Fail" or "Invalid". Is there anything else I should look at? I was logging the firewall (UFW), but I have very few open ports (FTP and SSH on a non-standard port), so looking at lists of IPs that have been blocked did not seem useful. Many thanks

    Read the article

  • Implementing a generic repository for WCF data services

    - by cibrax
    The repository implementation I am going to discuss here is not exactly what someone would call a repository in DDD terms, but it is an abstraction layer that comes in handy when unit testing the code around this repository. In other words, you can easily create a mock to replace the real repository implementation. The WCF Data Services update for .NET 3.5 introduced a nice feature to support two-way data binding, which is very helpful for developing WPF- or Silverlight-based applications but also for implementing the repository I am going to talk about. As part of this feature, the WCF Data Services client library introduced a new collection, DataServiceCollection<T>, that implements INotifyPropertyChanged to notify the data context (DataServiceContext) about any change in the association links. This means that it is no longer necessary to manually set or remove the links in the data context when an item is added to or removed from a collection. Before having this new collection, you basically used the following code to add a new item to a collection. Order order = new Order {   Name = "Foo" }; OrderItem item = new OrderItem {   Name = "bar",   UnitPrice = 10,   Qty = 1 }; var context = new OrderContext(); context.AddToOrders(order); context.AddToOrderItems(item); context.SetLink(item, "Order", order); context.SaveChanges(); Now, thanks to this new collection, everything is much simpler and similar to what you have in other ORMs like Entity Framework or L2S. Order order = new Order {   Name = "Foo" }; OrderItem item = new OrderItem {   Name = "bar",   UnitPrice = 10,   Qty = 1 }; order.Items.Add(item); var context = new OrderContext(); context.AddToOrders(order); context.SaveChanges(); In order to use this new feature, you first need to enable V2 in the data service, and then use some specific arguments in the DataSvcUtil tool (you can find more information about this new feature and how to use it in this post). DataSvcUtil /uri:"http://localhost:3655/MyDataService.svc/" /out:Reference.cs /dataservicecollection /version:2.0 Once you use those two arguments, the generated proxy classes will use DataServiceCollection<T> rather than a simple ObjectCollection<T>, which was the default collection in V1. There are some aspects that you need to know to use this feature correctly. 1. All the entities retrieved directly from the data context with a query track changes and report them to the data context automatically. 2. An entity created with "new" does not track any change in its properties or associations. In order to enable change tracking on this entity, you need to do the following trick. public Order CreateOrder() {   var collection = new DataServiceCollection<Order>(this.context);   var order = new Order();   collection.Add(order);   return order; } You basically need to create a collection and add the entity to that collection with the "Add" method to enable change tracking on that entity. 3. If you need to attach an existing entity (for example, if you created the entity with the "new" operator rather than retrieving it from the data context with a query) to a data context for tracking changes, you can use the "Load" method on the DataServiceCollection. var order = new Order {   Id = 1 }; var collection = new DataServiceCollection<Order>(this.context); collection.Load(order); In this case, the order with Id = 1 must exist in the data source exposed by the data service. Otherwise, you will get an error because the entity does not exist.
    The cool extension methods discussed by Stuart Leeks in this post, which replace all the magic strings in the "Expand" operation with expression trees, are another feature I am going to use to implement this generic repository. Thanks to these extension methods, you can replace the following query, which uses magic strings, with a piece of code that only uses expressions. Magic strings, var customers = dataContext.Customers .Expand("Orders")         .Expand("Orders/Items") Expressions, var customers = dataContext.Customers .Expand(c => c.Orders.SubExpand(o => o.Items)) That query basically returns all the customers with their orders and order items. OK, now that we have the automatic change tracking support and the expression support for explicitly loading entity associations, we are ready to create the repository. The interface for this repository looks like this: public interface IRepository { T Create<T>() where T : new(); void Update<T>(T entity); void Delete<T>(T entity); IQueryable<T> RetrieveAll<T>(params Expression<Func<T, object>>[] eagerProperties); IQueryable<T> Retrieve<T>(Expression<Func<T, bool>> predicate, params Expression<Func<T, object>>[] eagerProperties); void Attach<T>(T entity); void SaveChanges(); } The Retrieve and RetrieveAll methods are used to execute queries against the data service context. While both methods receive an array of expressions to load associations explicitly, only the Retrieve method receives a predicate representing the "where" clause. The following code represents the final implementation of this repository. public class DataServiceRepository: IRepository { ResourceRepositoryContext context; public DataServiceRepository() : this (new DataServiceContext()) { } public DataServiceRepository(DataServiceContext context) { this.context = context; } private static string ResolveEntitySet(Type type) { var entitySetAttribute = (EntitySetAttribute)type.GetCustomAttributes(typeof(EntitySetAttribute), true).FirstOrDefault(); if (entitySetAttribute != null) return entitySetAttribute.EntitySet; return null; } public T Create<T>() where T : new() { var collection = new DataServiceCollection<T>(this.context); var entity = new T(); collection.Add(entity); return entity; } public void Update<T>(T entity) { this.context.UpdateObject(entity); } public void Delete<T>(T entity) { this.context.DeleteObject(entity); } public void Attach<T>(T entity) { var collection = new DataServiceCollection<T>(this.context); collection.Load(entity); } public IQueryable<T> Retrieve<T>(Expression<Func<T, bool>> predicate, params Expression<Func<T, object>>[] eagerProperties) { var entitySet = ResolveEntitySet(typeof(T)); var query = context.CreateQuery<T>(entitySet); foreach (var e in eagerProperties) { query = query.Expand(e); } return query.Where(predicate); } public IQueryable<T> RetrieveAll<T>(params Expression<Func<T, object>>[] eagerProperties) { var entitySet = ResolveEntitySet(typeof(T)); var query = context.CreateQuery<T>(entitySet); foreach (var e in eagerProperties) { query = query.Expand(e); } return query; } public void SaveChanges() { this.context.SaveChanges(SaveChangesOptions.Batch); } } For instance, you can use the following code to retrieve customers whose first name equals "John", together with all their orders, in a single call. repository.Retrieve<Customer>(    c => c.FirstName == "John", //Where    c => c.Orders.SubExpand(o => o.Items)); In case you want to have some pre-defined queries that you are going to use in several places, you can put them in a specific class.
public static class CustomerQueries {   public static Expression<Func<Customer, bool>> LastNameEqualsTo(string lastName)   {     return c => c.LastName == lastName;   } } And then, use it with the repository. repository.Retrieve<Customer>(    CustomerQueries.LastNameEqualsTo("foo"),    c => c.Orders.SubExpand(o => o.Items));
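
    To make the unit-testing claim at the start of the article concrete, here is a minimal sketch of a hand-rolled test double for the same IRepository interface. The InMemoryRepository name and the idea of backing it with a plain list are my own illustration, not part of the original post; the eager-loading expressions are simply ignored because everything is already in memory.

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Linq.Expressions;

    public class InMemoryRepository : IRepository
    {
        private readonly List<object> store = new List<object>();

        public T Create<T>() where T : new()
        {
            var entity = new T();
            store.Add(entity);          // created entities are immediately "persisted"
            return entity;
        }

        public void Update<T>(T entity) { /* nothing to do for an in-memory fake */ }
        public void Delete<T>(T entity) { store.Remove(entity); }
        public void Attach<T>(T entity) { store.Add(entity); }
        public void SaveChanges() { /* no-op */ }

        public IQueryable<T> RetrieveAll<T>(params Expression<Func<T, object>>[] eagerProperties)
        {
            // expansions are irrelevant here; just hand back everything of the requested type
            return store.OfType<T>().AsQueryable();
        }

        public IQueryable<T> Retrieve<T>(Expression<Func<T, bool>> predicate,
                                         params Expression<Func<T, object>>[] eagerProperties)
        {
            return RetrieveAll<T>().Where(predicate);
        }
    }

    A test can then new up an InMemoryRepository, seed it with a few entities through Attach, and exercise the code under test without ever touching the data service.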

    Read the article

  • dynamic multiple instance of swfupload (firefox vs IE)

    - by jean27
    We have this dynamic uploader which creates a new instance of swfupload. I'm a little bit confused with the outputs produced by firefox and ie. Firefox have the same output as chrome, safari and opera. Whenever I clicked a button for adding a new instance, the previous instances of swfupload in firefox refresh while IE don't. I have this debug information: For Firefox: SWF DEBUG OUTPUT IN FIREFOX ---SWFUpload Instance Info--- Version: 2.2.0 2009-03-25 Movie Name: SWFUpload_0 Settings: upload_url: /86707/listing/asynchronousuploadphoto/87085/15/E1ptdReNMwcU/cUkx4p689ChPRZYMKkLZQ== flash_url: /content/swfupload.swf?preventswfcaching=1272512466022 use_query_string: false requeue_on_error: false http_success: assume_success_timeout: 0 file_post_name: Filedata post_params: [object Object] file_types: .jpg;.gif;.png;.bmp file_types_description: Image Files file_size_limit: 1MB file_upload_limit: 1 file_queue_limit: 1 debug: true prevent_swf_caching: true button_placeholder_id: file-1_swf button_placeholder: Not Set button_image_url: /content/images/blankButton.png button_width: 109 button_height: 22 button_text: button_text_style: color: #000000; font-size: 16pt; button_text_top_padding: 1 button_text_left_padding: 30 button_action: -110 button_disabled: false custom_settings: [object Object] Event Handlers: swfupload_loaded_handler assigned: true file_dialog_start_handler assigned: true file_queued_handler assigned: true file_queue_error_handler assigned: true upload_start_handler assigned: true upload_progress_handler assigned: true upload_error_handler assigned: true upload_success_handler assigned: true upload_complete_handler assigned: true debug_handler assigned: true SWF DEBUG: SWFUpload Init CompleteSWF DEBUG: SWF DEBUG: ----- SWF DEBUG OUTPUT ---- SWF DEBUG: Build Number: SWFUPLOAD 2.2.0 SWF DEBUG: movieName: SWFUpload_0 SWF DEBUG: Upload URL: /86707/listing/asynchronousuploadphoto/87085/15/E1ptdReNMwcU/cUkx4p689ChPRZYMKkLZQ== SWF DEBUG: File Types String: .jpg;.gif;.png;.bmp SWF DEBUG: Parsed File Types: jpg,gif,png,bmp SWF DEBUG: HTTP Success: 0 SWF DEBUG: File Types Description: Image Files (.jpg;.gif;.png;.bmp) SWF DEBUG: File Size Limit: 1048576 bytes SWF DEBUG: File Upload Limit: 1 SWF DEBUG: File Queue Limit: 1 SWF DEBUG: Post Params: SWF DEBUG: ----- END SWF DEBUG OUTPUT ---- SWF DEBUG: SWF DEBUG: Event: fileDialogStart : Browsing files. Multi Select. Allowed file types: .jpg;.gif;.png;.bmpSWF DEBUG: Select Handler: Received the files selected from the dialog. Processing the file list...SWF DEBUG: Event: fileQueued : File ID: SWFUpload_0_0SWF DEBUG: Event: fileDialogComplete : Finished processing selected files. Files selected: 1. 
Files Queued: 1---SWFUpload Instance Info--- Version: 2.2.0 2009-03-25 Movie Name: SWFUpload_1 Settings: upload_url: /86707/listing/asynchronousuploadphoto/87085/15/E1ptdReNMwcU/cUkx4p689ChPRZYMKkLZQ== flash_url: /content/swfupload.swf?preventswfcaching=1272512476357 use_query_string: false requeue_on_error: false http_success: assume_success_timeout: 0 file_post_name: Filedata post_params: [object Object] file_types: .jpg;.gif;.png;.bmp file_types_description: Image Files file_size_limit: 1MB file_upload_limit: 1 file_queue_limit: 1 debug: true prevent_swf_caching: true button_placeholder_id: file-2_swf button_placeholder: Not Set button_image_url: /content/images/blankButton.png button_width: 109 button_height: 22 button_text: button_text_style: color: #000000; font-size: 16pt; button_text_top_padding: 1 button_text_left_padding: 30 button_action: -110 button_disabled: false custom_settings: [object Object] Event Handlers: swfupload_loaded_handler assigned: true file_dialog_start_handler assigned: true file_queued_handler assigned: true file_queue_error_handler assigned: true upload_start_handler assigned: true upload_progress_handler assigned: true upload_error_handler assigned: true upload_success_handler assigned: true upload_complete_handler assigned: true debug_handler assigned: true SWF DEBUG: SWFUpload Init CompleteSWF DEBUG: SWF DEBUG: ----- SWF DEBUG OUTPUT ---- SWF DEBUG: Build Number: SWFUPLOAD 2.2.0 SWF DEBUG: movieName: SWFUpload_1 SWF DEBUG: Upload URL: /86707/listing/asynchronousuploadphoto/87085/15/E1ptdReNMwcU/cUkx4p689ChPRZYMKkLZQ== SWF DEBUG: File Types String: .jpg;.gif;.png;.bmp SWF DEBUG: Parsed File Types: jpg,gif,png,bmp SWF DEBUG: HTTP Success: 0 SWF DEBUG: File Types Description: Image Files (.jpg;.gif;.png;.bmp) SWF DEBUG: File Size Limit: 1048576 bytes SWF DEBUG: File Upload Limit: 1 SWF DEBUG: File Queue Limit: 1 SWF DEBUG: Post Params: SWF DEBUG: ----- END SWF DEBUG OUTPUT ---- SWF DEBUG: SWF DEBUG: SWFUpload Init CompleteSWF DEBUG: SWF DEBUG: ----- SWF DEBUG OUTPUT ---- SWF DEBUG: Build Number: SWFUPLOAD 2.2.0 SWF DEBUG: movieName: SWFUpload_0 SWF DEBUG: Upload URL: /86707/listing/asynchronousuploadphoto/87085/15/E1ptdReNMwcU/cUkx4p689ChPRZYMKkLZQ== SWF DEBUG: File Types String: .jpg;.gif;.png;.bmp SWF DEBUG: Parsed File Types: jpg,gif,png,bmp SWF DEBUG: HTTP Success: 0 SWF DEBUG: File Types Description: Image Files (.jpg;.gif;.png;.bmp) SWF DEBUG: File Size Limit: 1048576 bytes SWF DEBUG: File Upload Limit: 1 SWF DEBUG: File Queue Limit: 1 SWF DEBUG: Post Params: SWF DEBUG: ----- END SWF DEBUG OUTPUT ---- SWF DEBUG: SWF DEBUG: Event: fileDialogStart : Browsing files. Multi Select. Allowed file types: .jpg;.gif;.png;.bmpSWF DEBUG: Select Handler: Received the files selected from the dialog. Processing the file list...SWF DEBUG: Event: fileQueued : File ID: SWFUpload_1_0SWF DEBUG: Event: fileDialogComplete : Finished processing selected files. Files selected: 1. Files Queued: 1SWF DEBUG: StartUpload: First file in queueSWF DEBUG: StartUpload(): No files found in the queue.SWF DEBUG: StartUpload: First file in queueSWF DEBUG: Event: uploadStart : File ID: SWFUpload_1_0SWF DEBUG: ReturnUploadStart(): File accepted by startUpload event and readied for upload. Starting upload to /86707/listing/asynchronousuploadphoto/87085/15/E1ptdReNMwcU/cUkx4p689ChPRZYMKkLZQ== for File ID: SWFUpload_1_0SWF DEBUG: Event: uploadProgress (OPEN): File ID: SWFUpload_1_0SWF DEBUG: Event: uploadProgress: File ID: SWFUpload_1_0. Bytes: 30218. 
Total: 30218SWF DEBUG: Event: uploadSuccess: File ID: SWFUpload_1_0 Response Received: true Data: 65-AddClassification.pngSWF DEBUG: Event: uploadComplete : Upload cycle complete. For IE: SWF DEBUG OUTPUT IN IE ---SWFUpload Instance Info--- Version: 2.2.0 2009-03-25 Movie Name: SWFUpload_0 Settings: upload_url: /86707/listing/asynchronousuploadphoto/87085/15/E1ptdReNMwcU/cUkx4p689ChPRZYMKkLZQ== flash_url: /content/swfupload.swf?preventswfcaching=1272512200531 use_query_string: false requeue_on_error: false http_success: assume_success_timeout: 0 file_post_name: Filedata post_params: [object Object] file_types: .jpg;.gif;.png;.bmp file_types_description: Image Files file_size_limit: 1MB file_upload_limit: 1 file_queue_limit: 1 debug: true prevent_swf_caching: true button_placeholder_id: file-1_swf button_placeholder: Not Set button_image_url: /content/images/blankButton.png button_width: 109 button_height: 22 button_text: Browse... button_text_style: color: #000000; font-size: 16pt; button_text_top_padding: 1 button_text_left_padding: 30 button_action: -110 button_disabled: false custom_settings: [object Object] Event Handlers: swfupload_loaded_handler assigned: true file_dialog_start_handler assigned: true file_queued_handler assigned: true file_queue_error_handler assigned: true upload_start_handler assigned: true upload_progress_handler assigned: true upload_error_handler assigned: true upload_success_handler assigned: true upload_complete_handler assigned: true debug_handler assigned: true SWF DEBUG: SWFUpload Init CompleteSWF DEBUG: SWF DEBUG: ----- SWF DEBUG OUTPUT ---- SWF DEBUG: Build Number: SWFUPLOAD 2.2.0 SWF DEBUG: movieName: SWFUpload_0 SWF DEBUG: Upload URL: /86707/listing/asynchronousuploadphoto/87085/15/E1ptdReNMwcU/cUkx4p689ChPRZYMKkLZQ== SWF DEBUG: File Types String: .jpg;.gif;.png;.bmp SWF DEBUG: Parsed File Types: jpg,gif,png,bmp SWF DEBUG: HTTP Success: 0 SWF DEBUG: File Types Description: Image Files (.jpg;.gif;.png;.bmp) SWF DEBUG: File Size Limit: 1048576 bytes SWF DEBUG: File Upload Limit: 1 SWF DEBUG: File Queue Limit: 1 SWF DEBUG: Post Params: SWF DEBUG: ----- END SWF DEBUG OUTPUT ---- SWF DEBUG: Removing Flash functions hooks (this should only run in IE and should prevent memory leaks)SWF DEBUG: Event: fileDialogStart : Browsing files. Multi Select. Allowed file types: .jpg;.gif;.png;.bmpSWF DEBUG: Select Handler: Received the files selected from the dialog. Processing the file list...SWF DEBUG: Event: fileQueued : File ID: SWFUpload_0_0SWF DEBUG: Event: fileDialogComplete : Finished processing selected files. Files selected: 1. Files Queued: 1---SWFUpload Instance Info--- Version: 2.2.0 2009-03-25 Movie Name: SWFUpload_1 Settings: upload_url: /86707/listing/asynchronousuploadphoto/87085/15/E1ptdReNMwcU/cUkx4p689ChPRZYMKkLZQ== flash_url: /content/swfupload.swf?preventswfcaching=1272512222093 use_query_string: false requeue_on_error: false http_success: assume_success_timeout: 0 file_post_name: Filedata post_params: [object Object] file_types: .jpg;.gif;.png;.bmp file_types_description: Image Files file_size_limit: 1MB file_upload_limit: 1 file_queue_limit: 1 debug: true prevent_swf_caching: true button_placeholder_id: file-2_swf button_placeholder: Not Set button_image_url: /content/images/blankButton.png button_width: 109 button_height: 22 button_text: Browse... 
button_text_style: color: #000000; font-size: 16pt; button_text_top_padding: 1 button_text_left_padding: 30 button_action: -110 button_disabled: false custom_settings: [object Object] Event Handlers: swfupload_loaded_handler assigned: true file_dialog_start_handler assigned: true file_queued_handler assigned: true file_queue_error_handler assigned: true upload_start_handler assigned: true upload_progress_handler assigned: true upload_error_handler assigned: true upload_success_handler assigned: true upload_complete_handler assigned: true debug_handler assigned: true SWF DEBUG: SWFUpload Init CompleteSWF DEBUG: SWF DEBUG: ----- SWF DEBUG OUTPUT ---- SWF DEBUG: Build Number: SWFUPLOAD 2.2.0 SWF DEBUG: movieName: SWFUpload_1 SWF DEBUG: Upload URL: /86707/listing/asynchronousuploadphoto/87085/15/E1ptdReNMwcU/cUkx4p689ChPRZYMKkLZQ== SWF DEBUG: File Types String: .jpg;.gif;.png;.bmp SWF DEBUG: Parsed File Types: jpg,gif,png,bmp SWF DEBUG: HTTP Success: 0 SWF DEBUG: File Types Description: Image Files (.jpg;.gif;.png;.bmp) SWF DEBUG: File Size Limit: 1048576 bytes SWF DEBUG: File Upload Limit: 1 SWF DEBUG: File Queue Limit: 1 SWF DEBUG: Post Params: SWF DEBUG: ----- END SWF DEBUG OUTPUT ---- SWF DEBUG: Removing Flash functions hooks (this should only run in IE and should prevent memory leaks)SWF DEBUG: ExternalInterface reinitializedSWF DEBUG: Event: fileDialogStart : Browsing files. Multi Select. Allowed file types: .jpg;.gif;.png;.bmpSWF DEBUG: Select Handler: Received the files selected from the dialog. Processing the file list...SWF DEBUG: Event: fileQueued : File ID: SWFUpload_1_0SWF DEBUG: Event: fileDialogComplete : Finished processing selected files. Files selected: 1. Files Queued: 1SWF DEBUG: StartUpload: First file in queueSWF DEBUG: Event: uploadStart : File ID: SWFUpload_0_0SWF DEBUG: StartUpload: First file in queueSWF DEBUG: Event: uploadStart : File ID: SWFUpload_1_0SWF DEBUG: ReturnUploadStart(): File accepted by startUpload event and readied for upload. Starting upload to /86707/listing/asynchronousuploadphoto/87085/15/E1ptdReNMwcU/cUkx4p689ChPRZYMKkLZQ== for File ID: SWFUpload_0_0SWF DEBUG: ReturnUploadStart(): File accepted by startUpload event and readied for upload. Starting upload to /86707/listing/asynchronousuploadphoto/87085/15/E1ptdReNMwcU/cUkx4p689ChPRZYMKkLZQ== for File ID: SWFUpload_1_0SWF DEBUG: Event: uploadProgress (OPEN): File ID: SWFUpload_0_0SWF DEBUG: Event: uploadProgress: File ID: SWFUpload_0_0. Bytes: 29151. Total: 29151SWF DEBUG: Event: uploadProgress (OPEN): File ID: SWFUpload_1_0SWF DEBUG: Event: uploadProgress: File ID: SWFUpload_1_0. Bytes: Total: 30218SWF DEBUG: Event: uploadSuccess: File ID: SWFUpload_0_0 Response Received: true Data: 62-Greenwich_-_Branches.pngSWF DEBUG: Event: uploadComplete : Upload cycle complete.

    Read the article

  • trying to use mod_proxy with httpd and tomcat

    - by techsjs2012
    I been trying to use mod_proxy with httpd and tomcat... I have on VirtualBox running Scientific Linux which has httpd and tomcat 6 on it.. I made two nodes of tomcat6. I followed this guide like 10 times and still cant get the 2nd node of tomcat working.. http://www.richardnichols.net/2010/08/5-minute-guide-clustering-apache-tomcat/ Here is the lines from my http.conf file <Proxy balancer://testcluster stickysession=JSESSIONID> BalancerMember ajp://127.0.0.1:8009 min=10 max=100 route=node1 loadfactor=1 BalancerMember ajp://127.0.0.1:8109 min=10 max=100 route=node2 loadfactor=1 </Proxy> ProxyPass /examples balancer://testcluster/examples <Location /balancer-manager> SetHandler balancer-manager AuthType Basic AuthName "Balancer Manager" AuthUserFile "/etc/httpd/conf/.htpasswd" Require valid-user </Location> Now here is my server.xml from node1 <?xml version='1.0' encoding='utf-8'?> <!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> <!-- Note: A "Server" is not itself a "Container", so you may not define subcomponents such as "Valves" at this level. Documentation at /docs/config/server.html --> <Server port="8005" shutdown="SHUTDOWN"> <!--APR library loader. Documentation at /docs/apr.html --> <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" /> <!--Initialize Jasper prior to webapps are loaded. Documentation at /docs/jasper-howto.html --> <Listener className="org.apache.catalina.core.JasperListener" /> <!-- Prevent memory leaks due to use of particular java/javax APIs--> <Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener" /> <!-- JMX Support for the Tomcat server. Documentation at /docs/non-existent.html --> <Listener className="org.apache.catalina.mbeans.ServerLifecycleListener" /> <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" /> <!-- Global JNDI resources Documentation at /docs/jndi-resources-howto.html --> <GlobalNamingResources> <!-- Editable user database that can also be used by UserDatabaseRealm to authenticate users --> <Resource name="UserDatabase" auth="Container" type="org.apache.catalina.UserDatabase" description="User database that can be updated and saved" factory="org.apache.catalina.users.MemoryUserDatabaseFactory" pathname="conf/tomcat-users.xml" /> </GlobalNamingResources> <!-- A "Service" is a collection of one or more "Connectors" that share a single "Container" Note: A "Service" is not itself a "Container", so you may not define subcomponents such as "Valves" at this level. 
Documentation at /docs/config/service.html --> <Service name="Catalina"> <!--The connectors can use a shared executor, you can define one or more named thread pools--> <!-- <Executor name="tomcatThreadPool" namePrefix="catalina-exec-" maxThreads="150" minSpareThreads="4"/> --> <!-- A "Connector" represents an endpoint by which requests are received and responses are returned. Documentation at : Java HTTP Connector: /docs/config/http.html (blocking & non-blocking) Java AJP Connector: /docs/config/ajp.html APR (HTTP/AJP) Connector: /docs/apr.html Define a non-SSL HTTP/1.1 Connector on port 8080 <Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> --> <!-- A "Connector" using the shared thread pool--> <!-- <Connector executor="tomcatThreadPool" port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> --> <!-- Define a SSL HTTP/1.1 Connector on port 8443 This connector uses the JSSE configuration, when using APR, the connector should be using the OpenSSL style configuration described in the APR documentation --> <!-- <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="150" scheme="https" secure="true" clientAuth="false" sslProtocol="TLS" /> --> <!-- Define an AJP 1.3 Connector on port 8009 --> <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" /> <!-- An Engine represents the entry point (within Catalina) that processes every request. The Engine implementation for Tomcat stand alone analyzes the HTTP headers included with the request, and passes them on to the appropriate Host (virtual host). Documentation at /docs/config/engine.html --> <!-- You should set jvmRoute to support load-balancing via AJP ie : <Engine name="Catalina" defaultHost="localhost" jvmRoute="jvm1"> --> <Engine name="Catalina" defaultHost="localhost" jvmRoute="node1"> <!--For clustering, please take a look at documentation at: /docs/cluster-howto.html (simple how to) /docs/config/cluster.html (reference documentation) --> <!-- <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/> --> <!-- The request dumper valve dumps useful debugging information about the request and response data received and sent by Tomcat. Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.valves.RequestDumperValve"/> --> <!-- This Realm uses the UserDatabase configured in the global JNDI resources under the key "UserDatabase". Any edits that are performed against this UserDatabase are immediately available for use by the Realm. --> <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/> <!-- Define the default virtual host Note: XML Schema validation will not work with Xerces 2.2. --> <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true" xmlValidation="false" xmlNamespaceAware="false"> <!-- SingleSignOn valve, share authentication between web applications Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.authenticator.SingleSignOn" /> --> <!-- Access log processes all example. Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs" prefix="localhost_access_log." suffix=".txt" pattern="common" resolveHosts="false"/> --> </Host> </Engine> </Service> </Server> now here is the server.xml file from node2 <?xml version='1.0' encoding='utf-8'?> <!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. 
See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> <!-- Note: A "Server" is not itself a "Container", so you may not define subcomponents such as "Valves" at this level. Documentation at /docs/config/server.html --> <Server port="8105" shutdown="SHUTDOWN"> <!--APR library loader. Documentation at /docs/apr.html --> <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" /> <!--Initialize Jasper prior to webapps are loaded. Documentation at /docs/jasper-howto.html --> <Listener className="org.apache.catalina.core.JasperListener" /> <!-- Prevent memory leaks due to use of particular java/javax APIs--> <Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener" /> <!-- JMX Support for the Tomcat server. Documentation at /docs/non-existent.html --> <Listener className="org.apache.catalina.mbeans.ServerLifecycleListener" /> <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" /> <!-- Global JNDI resources Documentation at /docs/jndi-resources-howto.html --> <GlobalNamingResources> <!-- Editable user database that can also be used by UserDatabaseRealm to authenticate users --> <Resource name="UserDatabase" auth="Container" type="org.apache.catalina.UserDatabase" description="User database that can be updated and saved" factory="org.apache.catalina.users.MemoryUserDatabaseFactory" pathname="conf/tomcat-users.xml" /> </GlobalNamingResources> <!-- A "Service" is a collection of one or more "Connectors" that share a single "Container" Note: A "Service" is not itself a "Container", so you may not define subcomponents such as "Valves" at this level. Documentation at /docs/config/service.html --> <Service name="Catalina"> <!--The connectors can use a shared executor, you can define one or more named thread pools--> <!-- <Executor name="tomcatThreadPool" namePrefix="catalina-exec-" maxThreads="150" minSpareThreads="4"/> --> <!-- A "Connector" represents an endpoint by which requests are received and responses are returned. 
Documentation at : Java HTTP Connector: /docs/config/http.html (blocking & non-blocking) Java AJP Connector: /docs/config/ajp.html APR (HTTP/AJP) Connector: /docs/apr.html Define a non-SSL HTTP/1.1 Connector on port 8080 <Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> --> <!-- A "Connector" using the shared thread pool--> <!-- <Connector executor="tomcatThreadPool" port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> --> <!-- Define a SSL HTTP/1.1 Connector on port 8443 This connector uses the JSSE configuration, when using APR, the connector should be using the OpenSSL style configuration described in the APR documentation --> <!-- <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="150" scheme="https" secure="true" clientAuth="false" sslProtocol="TLS" /> --> <!-- Define an AJP 1.3 Connector on port 8009 --> <Connector port="8109" protocol="AJP/1.3" redirectPort="8443" /> <!-- An Engine represents the entry point (within Catalina) that processes every request. The Engine implementation for Tomcat stand alone analyzes the HTTP headers included with the request, and passes them on to the appropriate Host (virtual host). Documentation at /docs/config/engine.html --> <!-- You should set jvmRoute to support load-balancing via AJP ie : <Engine name="Catalina" defaultHost="localhost" jvmRoute="jvm1"> --> <Engine name="Catalina" defaultHost="localhost" jvmRoute="node2"> <!--For clustering, please take a look at documentation at: /docs/cluster-howto.html (simple how to) /docs/config/cluster.html (reference documentation) --> <!-- <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/> --> <!-- The request dumper valve dumps useful debugging information about the request and response data received and sent by Tomcat. Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.valves.RequestDumperValve"/> --> <!-- This Realm uses the UserDatabase configured in the global JNDI resources under the key "UserDatabase". Any edits that are performed against this UserDatabase are immediately available for use by the Realm. --> <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/> <!-- Define the default virtual host Note: XML Schema validation will not work with Xerces 2.2. --> <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true" xmlValidation="false" xmlNamespaceAware="false"> <!-- SingleSignOn valve, share authentication between web applications Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.authenticator.SingleSignOn" /> --> <!-- Access log processes all example. Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs" prefix="localhost_access_log." suffix=".txt" pattern="common" resolveHosts="false"/> --> </Host> </Engine> </Service> </Server> I dont know what it is. but I been trying for days

    Read the article

  • Location of Java dump heap file?

    - by Jim Ferrans
    Well, this is embarrassing... I'm starting to play with the Eclipse Memory Analyzer to look for Java memory leaks on a Windows box. Step 1 is to obtain a heap dump file. To do this, I start my Java (javaw.exe) process from within Eclipse and connect to it with jconsole. Then on the jconsole MBeans tab I click the dumpHeap button. The first time I did this, I saw a pop-up saying it had created the heap dump file, but not giving its name or location. Now whenever I do a dumpHeap again while connected to a different javaw.exe process, jconsole says: Problem invoking dumpHeap : java.io.IOException: File exists and of course doesn't give its name or path. Where could it be? I've searched my C: drive (using cygwin command line tools) for files containing "hprof" or "java_pid" or "heapdump" and didn't find anything plausible. I've even used Windows search to look for all files in my Eclipse workspace that have changed in the last day. I'm using the Sun Java 1.6 JVM, and don't have -XX:HeapDumpPath set.
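
    One way to take the guesswork out of the location is to request the dump with an explicit, absolute path. The sketch below is my own illustration, assuming a Sun JDK 6 JVM that exposes the com.sun.management HotSpotDiagnostic MXBean (the same MBean behind jconsole's dumpHeap operation); the output path is hypothetical. It also hints at why the "File exists" IOException appears: dumpHeap will not overwrite an existing file, and a relative filename is resolved against the target JVM's working directory, which is likely where the first dump ended up.

    // sketch: dump the current JVM's heap to an explicit path (Sun JDK 6 assumed)
    import java.lang.management.ManagementFactory;
    import com.sun.management.HotSpotDiagnosticMXBean;

    public class HeapDumper {
        public static void main(String[] args) throws Exception {
            HotSpotDiagnosticMXBean diag = ManagementFactory.newPlatformMXBeanProxy(
                    ManagementFactory.getPlatformMBeanServer(),
                    "com.sun.management:type=HotSpotDiagnostic",
                    HotSpotDiagnosticMXBean.class);
            // first argument: output file (must not already exist); second: dump only live objects
            diag.dumpHeap("C:\\dumps\\heap.hprof", true);
        }
    }

    The same two arguments can be supplied to jconsole's dumpHeap operation, so passing a full path such as C:\dumps\heap.hprof there avoids hunting for the file afterwards.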

    Read the article

  • MVVM Light Toolkit - RelayCommands, DelegateCommands, and ObservableObjects

    - by DanM
    I just started experimenting with Laurent Bugnion's MVVM Light Toolkit. I think I'm going to really like it, but I have a couple of questions. Before I get to them, I need to explain where I'm coming from. I currently use a combination of Josh Smith's MVVM Foundation and another project on CodePlex called MVVM Toolkit. I use ObservableObject and Messenger from MVVM Foundation and DelegateCommand and CommandReference from MVVM Toolkit. The only real overlap between MVVM Foundation and MVVM Toolkit is that they both have an implementation of ICommand: MVVM Foundation has RelayCommand and MVVM Toolkit has DelegateCommand. Of these two, DelegateCommand appears to be more sophisticated. It employs a CommandManagerHelper that uses weak references to avoid memory leaks. With that said, a couple of questions: Why does MVVM Light Toolkit use RelayCommand rather than DelegateCommand? Is the use of weak references in an ICommand unnecessary or not recommended for some reason? Why is there no ObservableObject in MVVM Light? ObservableObject is basically just the part of ViewModelBase that implements INotifyPropertyChanged, but it's very convenient to have as a separate class because view-models are not the only objects that need to implement INotifyPropertyChanged. For example, let's say you have a DataGrid that binds to a list of Person objects. If any of the properties in Person can change while the user is viewing the DataGrid, Person would need to implement INotifyPropertyChanged. (I realize that if Person is auto-generated using something like LinqToSql, it will probably already implement INotifyPropertyChanged, but there are cases where I need to make a view-specific version of entity model objects, say, because I need to include a command to support a button column in a DataGrid.) Thanks.
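
    For reference, this is roughly the INotifyPropertyChanged boilerplate that an ObservableObject (or ViewModelBase) base class saves you from repeating. The Person class below is just a hand-written sketch for the DataGrid scenario described above, not code from either toolkit.

    using System.ComponentModel;

    public class Person : INotifyPropertyChanged
    {
        public event PropertyChangedEventHandler PropertyChanged;

        private string lastName;
        public string LastName
        {
            get { return lastName; }
            set
            {
                if (lastName == value) return;
                lastName = value;
                OnPropertyChanged("LastName"); // keeps any bound DataGrid cell in sync
            }
        }

        private void OnPropertyChanged(string propertyName)
        {
            var handler = PropertyChanged;
            if (handler != null)
                handler(this, new PropertyChangedEventArgs(propertyName));
        }
    }

    With a base class like ObservableObject, the event and the OnPropertyChanged helper move into the base class, and each setter shrinks to a value check plus one call to the base class's property-changed helper.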

    Read the article

  • Memory leak / GLib issue.

    - by Andrei Ciobanu
    1: /* 2: * File: xyn-playlist.c 3: * Author: Andrei Ciobanu 4: * 5: * Created on June 4, 2010, 12:47 PM 6: */ 7:   8: #include <dirent.h> 9: #include <glib.h> 10: #include <stdio.h> 11: #include <stdlib.h> 12: #include <sys/stat.h> 13: #include <unistd.h> 14:   15: /** 16: * Returns a list all the file(paths) from a directory. 17: * Returns 'NULL' if a certain error occurs. 18: * @param dir_path. 19: * @param A list of gchars* indicating what file patterns to detect. 20: */ 21: GSList *xyn_pl_get_files(const gchar *dir_path, GSList *file_patterns) { 22: /* Returning list containing file paths */ 23: GSList *fpaths = NULL; 24: /* Used to scan directories for subdirs. Acts like a 25: * stack, to avoid recursion. */ 26: GSList *dirs = NULL; 27: /* Current dir */ 28: DIR *cdir = NULL; 29: /* Current dir entries */ 30: struct dirent *cent = NULL; 31: /* File stats */ 32: struct stat cent_stat; 33: /* dir_path duplicate, on the heap */ 34: gchar *dir_pdup; 35:   36: if (dir_path == NULL) { 37: return NULL; 38: } 39:   40: dir_pdup = g_strdup((const gchar*) dir_path); 41: dirs = g_slist_append(dirs, (gpointer) dir_pdup); 42: while (dirs != NULL) { 43: cdir = opendir((const gchar*) dirs->data); 44: if (cdir == NULL) { 45: g_slist_free(dirs); 46: g_slist_free(fpaths); 47: return NULL; 48: } 49: chdir((const gchar*) dirs->data); 50: while ((cent = readdir(cdir)) != NULL) { 51: lstat(cent->d_name, &cent_stat); 52: if (S_ISDIR(cent_stat.st_mode)) { 53: if (g_strcmp0(cent->d_name, ".") == 0 || 54: g_strcmp0(cent->d_name, "..") == 0) { 55: /* Skip "." and ".." dirs */ 56: continue; 57: } 58: dirs = g_slist_append(dirs, 59: g_strconcat((gchar*) dirs->data, "/", cent->d_name, NULL)); 60: } else { 61: fpaths = g_slist_append(fpaths, 62: g_strconcat((gchar*) dirs->data, "/", cent->d_name, NULL)); 63: } 64: } 65: g_free(dirs->data); 66: dirs = g_slist_delete_link(dirs, dirs); 67: closedir(cdir); 68: } 69: return fpaths; 70: } 71:   72: int main(int argc, char** argv) { 73: GSList *l = NULL; 74: l = xyn_pl_get_files("/home/andrei/Music", NULL); 75: g_slist_foreach(l,(GFunc)printf,NULL); 76: printf("%d\n",g_slist_length(l)); 77: g_slist_free(l); 78: return (0); 79: } 80:   81:   82: -----------------------------------------------------------------------------------------------==15429== 83: ==15429== HEAP SUMMARY: 84: ==15429== in use at exit: 751,451 bytes in 7,263 blocks 85: ==15429== total heap usage: 8,611 allocs, 1,348 frees, 22,898,217 bytes allocated 86: ==15429== 87: ==15429== 120 bytes in 1 blocks are possibly lost in loss record 1 of 11 88: ==15429== at 0x4024106: memalign (vg_replace_malloc.c:581) 89: ==15429== by 0x4024163: posix_memalign (vg_replace_malloc.c:709) 90: ==15429== by 0x40969C1: ??? 
(in /lib/libglib-2.0.so.0.2400.1) 91: ==15429== by 0x40971F6: g_slice_alloc (in /lib/libglib-2.0.so.0.2400.1) 92: ==15429== by 0x40988A5: g_slist_append (in /lib/libglib-2.0.so.0.2400.1) 93: ==15429== by 0x80488F0: xyn_pl_get_files (xyn-playlist.c:41) 94: ==15429== by 0x8048848: main (main.c:18) 95: ==15429== 96: ==15429== 129 bytes in 1 blocks are possibly lost in loss record 2 of 11 97: ==15429== at 0x4024F20: malloc (vg_replace_malloc.c:236) 98: ==15429== by 0x4081243: g_malloc (in /lib/libglib-2.0.so.0.2400.1) 99: ==15429== by 0x409B85B: g_strconcat (in /lib/libglib-2.0.so.0.2400.1) 100: ==15429== by 0x80489FE: xyn_pl_get_files (xyn-playlist.c:62) 101: ==15429== by 0x8048848: main (main.c:18) 102: ==15429== 103: ==15429== 360 bytes in 3 blocks are possibly lost in loss record 3 of 11 104: ==15429== at 0x4024106: memalign (vg_replace_malloc.c:581) 105: ==15429== by 0x4024163: posix_memalign (vg_replace_malloc.c:709) 106: ==15429== by 0x40969C1: ??? (in /lib/libglib-2.0.so.0.2400.1) 107: ==15429== by 0x4097222: g_slice_alloc (in /lib/libglib-2.0.so.0.2400.1) 108: ==15429== by 0x40988A5: g_slist_append (in /lib/libglib-2.0.so.0.2400.1) 109: ==15429== by 0x80488F0: xyn_pl_get_files (xyn-playlist.c:41) 110: ==15429== by 0x8048848: main (main.c:18) 111: ==15429== 112: ==15429== 508 bytes in 1 blocks are still reachable in loss record 4 of 11 113: ==15429== at 0x402425F: calloc (vg_replace_malloc.c:467) 114: ==15429== by 0x408113B: g_malloc0 (in /lib/libglib-2.0.so.0.2400.1) 115: ==15429== by 0x409624D: ??? (in /lib/libglib-2.0.so.0.2400.1) 116: ==15429== by 0x409710C: g_slice_alloc (in /lib/libglib-2.0.so.0.2400.1) 117: ==15429== by 0x40988A5: g_slist_append (in /lib/libglib-2.0.so.0.2400.1) 118: ==15429== by 0x80488F0: xyn_pl_get_files (xyn-playlist.c:41) 119: ==15429== by 0x8048848: main (main.c:18) 120: ==15429== 121: ==15429== 508 bytes in 1 blocks are still reachable in loss record 5 of 11 122: ==15429== at 0x402425F: calloc (vg_replace_malloc.c:467) 123: ==15429== by 0x408113B: g_malloc0 (in /lib/libglib-2.0.so.0.2400.1) 124: ==15429== by 0x409626F: ??? (in /lib/libglib-2.0.so.0.2400.1) 125: ==15429== by 0x409710C: g_slice_alloc (in /lib/libglib-2.0.so.0.2400.1) 126: ==15429== by 0x40988A5: g_slist_append (in /lib/libglib-2.0.so.0.2400.1) 127: ==15429== by 0x80488F0: xyn_pl_get_files (xyn-playlist.c:41) 128: ==15429== by 0x8048848: main (main.c:18) 129: ==15429== 130: ==15429== 508 bytes in 1 blocks are still reachable in loss record 6 of 11 131: ==15429== at 0x402425F: calloc (vg_replace_malloc.c:467) 132: ==15429== by 0x408113B: g_malloc0 (in /lib/libglib-2.0.so.0.2400.1) 133: ==15429== by 0x4096291: ??? (in /lib/libglib-2.0.so.0.2400.1) 134: ==15429== by 0x409710C: g_slice_alloc (in /lib/libglib-2.0.so.0.2400.1) 135: ==15429== by 0x40988A5: g_slist_append (in /lib/libglib-2.0.so.0.2400.1) 136: ==15429== by 0x80488F0: xyn_pl_get_files (xyn-playlist.c:41) 137: ==15429== by 0x8048848: main (main.c:18) 138: ==15429== 139: ==15429== 1,200 bytes in 10 blocks are possibly lost in loss record 7 of 11 140: ==15429== at 0x4024106: memalign (vg_replace_malloc.c:581) 141: ==15429== by 0x4024163: posix_memalign (vg_replace_malloc.c:709) 142: ==15429== by 0x40969C1: ??? 
==15429==    by 0x40971F6: g_slice_alloc (in /lib/libglib-2.0.so.0.2400.1)
==15429==    by 0x40988A5: g_slist_append (in /lib/libglib-2.0.so.0.2400.1)
==15429==    by 0x8048A0D: xyn_pl_get_files (xyn-playlist.c:61)
==15429==    by 0x8048848: main (main.c:18)
==15429==
==15429== 2,040 bytes in 1 blocks are still reachable in loss record 8 of 11
==15429==    at 0x402425F: calloc (vg_replace_malloc.c:467)
==15429==    by 0x408113B: g_malloc0 (in /lib/libglib-2.0.so.0.2400.1)
==15429==    by 0x40970AB: g_slice_alloc (in /lib/libglib-2.0.so.0.2400.1)
==15429==    by 0x40988A5: g_slist_append (in /lib/libglib-2.0.so.0.2400.1)
==15429==    by 0x80488F0: xyn_pl_get_files (xyn-playlist.c:41)
==15429==    by 0x8048848: main (main.c:18)
==15429==
==15429== 4,320 bytes in 36 blocks are possibly lost in loss record 9 of 11
==15429==    at 0x4024106: memalign (vg_replace_malloc.c:581)
==15429==    by 0x4024163: posix_memalign (vg_replace_malloc.c:709)
==15429==    by 0x40969C1: ??? (in /lib/libglib-2.0.so.0.2400.1)
==15429==    by 0x4097222: g_slice_alloc (in /lib/libglib-2.0.so.0.2400.1)
==15429==    by 0x40988A5: g_slist_append (in /lib/libglib-2.0.so.0.2400.1)
==15429==    by 0x80489D2: xyn_pl_get_files (xyn-playlist.c:58)
==15429==    by 0x8048848: main (main.c:18)
==15429==
==15429== 56,640 bytes in 472 blocks are possibly lost in loss record 10 of 11
==15429==    at 0x4024106: memalign (vg_replace_malloc.c:581)
==15429==    by 0x4024163: posix_memalign (vg_replace_malloc.c:709)
==15429==    by 0x40969C1: ??? (in /lib/libglib-2.0.so.0.2400.1)
==15429==    by 0x4097222: g_slice_alloc (in /lib/libglib-2.0.so.0.2400.1)
==15429==    by 0x40988A5: g_slist_append (in /lib/libglib-2.0.so.0.2400.1)
==15429==    by 0x8048A0D: xyn_pl_get_files (xyn-playlist.c:61)
==15429==    by 0x8048848: main (main.c:18)
==15429==
==15429== 685,118 bytes in 6,736 blocks are definitely lost in loss record 11 of 11
==15429==    at 0x4024F20: malloc (vg_replace_malloc.c:236)
==15429==    by 0x4081243: g_malloc (in /lib/libglib-2.0.so.0.2400.1)
==15429==    by 0x409B85B: g_strconcat (in /lib/libglib-2.0.so.0.2400.1)
==15429==    by 0x80489FE: xyn_pl_get_files (xyn-playlist.c:62)
==15429==    by 0x8048848: main (main.c:18)
==15429==
==15429== LEAK SUMMARY:
==15429==    definitely lost: 685,118 bytes in 6,736 blocks
==15429==    indirectly lost: 0 bytes in 0 blocks
==15429==      possibly lost: 62,769 bytes in 523 blocks
==15429==    still reachable: 3,564 bytes in 4 blocks
==15429==         suppressed: 0 bytes in 0 blocks
==15429==
==15429== For counts of detected and suppressed errors, rerun with: -v
==15429== ERROR SUMMARY: 7 errors from 7 contexts (suppressed: 17 from 8)
------------------------------------------------------------------------------

I am using the above code to build a list of all the file paths in a given directory. (In my case fts.h and ftw.h are not an option.) I am using GLib as the data-structures library. Still, I have my doubts about the way GLib allocates and de-allocates memory: when invoking g_slist_free(list), do I also free the data contained by the elements? Why do all those memory leaks appear? Is valgrind a suitable tool for profiling memory issues when using a complex library like GLib?
LATER EDIT: If I call g_slist_foreach(l, (GFunc) g_free, NULL); before freeing the list, the valgrind report is different (all the memory leaks move from 'definitely lost' to 'indirectly lost'). Still, I don't see the point: don't GLib collections implement a way to be freed together with their data?
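For reference, a minimal sketch of the deep-free I have in mind (this assumes the xyn_pl_get_files() above; g_slist_free_full() requires GLib >= 2.28, and on older versions the foreach/free pair should be equivalent):

    /* g_slist_free() only releases the GSList nodes themselves; the gchar*
     * paths created with g_strconcat() have to be freed separately. */
    GSList *l = xyn_pl_get_files("/home/andrei/Music", NULL);

    g_slist_foreach(l, (GFunc) g_free, NULL);  /* free each path string     */
    g_slist_free(l);                           /* then free the list nodes  */

    /* or, with GLib >= 2.28, in a single call:
     * g_slist_free_full(l, g_free); */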

    Read the article

< Previous Page | 21 22 23 24 25 26 27 28 29 30 31  | Next Page >