Search Results

Search found 13869 results on 555 pages for 'memory dump'.


  • Patterns for a tree of persistent data with multiple storage options?

    - by Robin Winslow
    I have a real-world problem which I'll try to abstract into an illustrative example. So imagine I have data objects in a tree, where parent objects can access children, and children can access parents:

        // Interfaces
        interface IParent<TChild> { List<TChild> Children { get; set; } }
        interface IChild<TParent> { TParent Parent { get; set; } }

        // Classes
        class Top : IParent<Middle> {}
        class Middle : IParent<Bottom>, IChild<Top> {}
        class Bottom : IChild<Middle> {}

        // Usage
        var top = new Top();
        var middles = top.Children; // List<Middle>
        foreach (var middle in middles)
        {
            var bottoms = middle.Children;       // List<Bottom>
            foreach (var bottom in bottoms)
            {
                var parent = bottom.Parent;      // access the parent
                var grandparent = parent.Parent; // access the grandparent
            }
        }

    All three data objects have properties that are persisted in two data stores (e.g. a database and a web service), and they need to reflect and synchronise with the stores. Some objects only request data from the web service, some only write to it.

    Data Mapper

    My favourite pattern for data access is Data Mapper, because it completely separates the data objects themselves from the communication with the data store:

        class TopMapper
        {
            public Top FetchById(int id)
            {
                var top = new Top(DataStore.TopDataById(id));
                top.Children = MiddleMapper.FetchForTop(top);
                return top;
            }
        }

        class MiddleMapper
        {
            public Middle FetchById(int id)
            {
                var middle = new Middle(DataStore.MiddleDataById(id));
                middle.Parent = TopMapper.FetchForMiddle(middle);
                middle.Children = BottomMapper.FetchForMiddle(middle);
                return middle;
            }
        }

    This way I can have one mapper per data store, build the object from the mapper I want, and then save it back using the mapper I want. There is a circular reference here, but I guess that's not a problem because most languages can just store memory references to the objects, so there won't actually be infinite data.

    The problem with this is that every time I want to construct a new Top, Middle or Bottom, it needs to build the entire object tree within that object's Parent or Children property, with all the data store requests and memory usage that entails. And in real life my tree is much bigger than the one represented here, so that's a problem.

    Requests in the object

    In this approach the objects request their Parents and Children themselves:

        class Middle
        {
            private List<Bottom> _children = null; // cache

            public List<Bottom> Children
            {
                get
                {
                    _children = _children ?? BottomMapper.FetchForMiddle(this);
                    return _children;
                }
                set
                {
                    BottomMapper.UpdateForMiddle(this, value);
                    _children = value;
                }
            }
        }

    I think this is an example of the Repository pattern. Is that correct?

    This solution seems neat - the data only gets requested from the data store when you need it, and thereafter it's stored in the object if you want to request it again, avoiding a further request. However, I have two different data sources. There's a database, but there's also a web service, and I need to be able to create an object from the web service, save it back to the database, then request it again from the database and update the web service.

    This also makes me uneasy because the data objects themselves are no longer ignorant of the data source. We've introduced a new dependency, not to mention a circular dependency, making it harder to test. And the objects now mask their communication with the database.

    Other solutions

    Are there any other solutions which could take care of the multiple stores problem but also mean that I don't need to build / request all the data every time?
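
    One possible middle ground, sketched below purely as an illustration (none of these class or interface names come from the question): the mapper chooses the store and hands the object a deferred loader, so children are fetched lazily on first access while the object itself stays ignorant of any data source.

        // Minimal C# sketch: lazy-loaded children wired up by the mapper.
        // IMiddleStore is a hypothetical abstraction that could be backed by
        // either the database or the web service.
        using System;
        using System.Collections.Generic;

        class Bottom { }

        interface IMiddleStore
        {
            List<Bottom> BottomsForMiddle(int middleId);
        }

        class Middle
        {
            private readonly Lazy<List<Bottom>> _children;

            // The object only sees a deferred value, never a mapper or a store.
            public Middle(Lazy<List<Bottom>> children)
            {
                _children = children;
            }

            public List<Bottom> Children
            {
                get { return _children.Value; } // loaded on first access only
            }
        }

        class MiddleMapper
        {
            private readonly IMiddleStore _store;

            public MiddleMapper(IMiddleStore store)
            {
                _store = store;
            }

            public Middle FetchById(int id)
            {
                // Nothing is loaded yet; the store is queried only when
                // Children is first read.
                return new Middle(new Lazy<List<Bottom>>(() => _store.BottomsForMiddle(id)));
            }
        }

    Constructing the mapper with the database-backed or the web-service-backed IMiddleStore implementation then decides where the lazy fetch goes, without the data object knowing.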


  • "The connection has timed out" - Please help!

    - by gon
    I recently installed a fresh Ubuntu 12.04 LTS on a desktop, and the installation itself was successful (other than a 'grub rescue' issue that I encountered but fixed), but this connection problem is really giving me a headache.

    Symptoms:

    1. When I open the Firefox browser and try to connect to a website, it just hangs for a while saying "Connecting..." but eventually loads an error page, "The connection has timed out".
    2. It's not a browser problem (and I tried setting the ipv6 option to "true" at about:config), because running "sudo apt-get install [some-random-package]" at the terminal fails too ("E: Unable to locate package [package]"). All other operations that need internet access are not working.
    3. I certainly see a wired network (called "eth1") in the Network Manager, and it says "Connection Established" after disconnecting and then connecting again.

    I have tried almost everything that could be found from Google search results, still no luck. Their problems slightly differ from mine or the solutions just don't work. By the way, it didn't have internet access when installing Ubuntu 12.04 (I ignored the message that I need internet to install Ubuntu). Could this be a problem? I'm sorry I don't remember if the internet worked or not on the previous version of Ubuntu. :( I would really appreciate your help... I don't even know what more to do if this fails too.. Thanks!!

    Thanks for your comment. Here is the result of ifconfig:

        eth0      Link encap:Ethernet  HWaddr 78:ac:c0:3d:b2:b9
                  inet addr:10.10.65.185  Bcast:10.10.65.255  Mask:255.255.255.0
                  inet6 addr: fe80::7aac:c0ff:fe3d:b2b9/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:3907 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:771 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:393118 (393.1 KB)  TX bytes:73472 (73.4 KB)
                  Interrupt:16

        eth1      Link encap:Ethernet  HWaddr 78:ac:c0:3d:b2:b8
                  UP BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
                  Interrupt:17

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:4 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:4 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:204 (204.0 B)  TX bytes:204 (204.0 B)

    route -n:

        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        0.0.0.0         10.10.65.1      0.0.0.0         UG    0      0        0 eth0
        10.10.65.0      0.0.0.0         255.255.255.0   U     1      0        0 eth0
        169.254.0.0     0.0.0.0         255.255.0.0     U     1000   0        0 eth0

    /etc/resolv.conf:

        # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
        # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
        nameserver 8.8.8.8
        nameserver 8.8.4.4
        nameserver 10.81.1.8
        nameserver 10.1.2.10
        nameserver 127.0.0.1
        search yamatake.local

    /etc/network/interfaces:

        auto lo
        iface lo inet loopback
        #auto eth0
        #iface eth0 inet dhcp
        #auto eth1
        #iface eth1 inet dhcp

    And I'll also include the result of 'sudo lshw -C network' in case it might help:

        *-network
             description: Ethernet interface
             product: NetXtreme BCM5764M Gigabit Ethernet PCIe
             vendor: Broadcom Corporation
             physical id: 0
             bus info: pci@0000:02:00.0
             logical name: eth0
             version: 10
             serial: 78:ac:c0:3d:b2:b9
             size: 100Mbit/s
             capacity: 1Gbit/s
             width: 64 bits
             clock: 33MHz
             capabilities: pm vpd msi pciexpress bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
             configuration: autonegotiation=on broadcast=yes driver=tg3 driverversion=3.121 duplex=full firmware=5764m-v3.35 ip=10.10.65.185 latency=0 link=yes multicast=yes port=twisted pair speed=100Mbit/s
             resources: irq:93 memory:fc000000-fc00ffff
        *-network
             description: Ethernet interface
             product: NetXtreme BCM5764M Gigabit Ethernet PCIe
             vendor: Broadcom Corporation
             physical id: 0
             bus info: pci@0000:01:00.0
             logical name: eth1
             version: 10
             serial: 78:ac:c0:3d:b2:b8
             size: 100Mbit/s
             capacity: 1Gbit/s
             width: 64 bits
             clock: 33MHz
             capabilities: pm vpd msi pciexpress bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
             configuration: autonegotiation=on broadcast=yes driver=tg3 driverversion=3.121 duplex=full firmware=5764m-v3.35 latency=0 link=no multicast=yes port=twisted pair speed=100Mbit/s
             resources: irq:94 memory:fb000000-fb00ffff


  • Application Logging needs work

    Application Logging

    Application logging is the act of logging events that occur within an application, much like how a court reporter documents what happens in a court case. Application logs can be useful for several reasons, but the most common use for logs is to recreate the steps that led to an application error in order to find its root cause. Other uses include the detection of fraud, verification of user activity, or providing audits of user/data interactions.

    "Logs can contain different kinds of data. The selection of the data used is normally affected by the motivation leading to the logging." (OWASP, 2009) OWASP also states that logging should include applicable debugging information such as the event date and time, the responsible process, and a description of the event.

    "There are many reasons why a logging system is a necessary part of delivering a distributed application. One of the most important is the ability to track exactly how many users are using the application during different time periods." (Hatton, 2000) Hatton also states that application logging helps system designers determine whether parts of an application aren't being used as designed. He implies that low usage can be used to identify whether users like or dislike aspects of a system based on their usage of the application. This enables application designers to work out why users don't like aspects of an application so that changes can be made to increase its usefulness and effectiveness.

    "Logging memory usage can also assist you in tuning up the internals of your application. If you're experiencing a randomly occurring problem, being able to match activities performed with the memory status at the time may enable you to discover the cause of the problem. It also gives you a good indication of the health of the distributed server machine at the time any activity is performed." (Hatton, 2000)

    Commonly Logged Application Events (Defined by OWASP)

    Access of Data
    Creation of Data
    Modification of Data in any form
    Administrative Functions
    Configuration Changes
    Debugging Information (Application Events)
    Authorization Attempts
    Data Deletion
    Network Communication
    Authentication Events
    Errors/Exceptions

    Application Error Logging

    The functionality associated with application error logging is actually the combination of proper error handling and application logging. If we look back at Figure 4 and Figure 5, these code examples allow developers to handle various types of errors that occur within the life cycle of an application's execution. Application logging can be applied within the Catch section of the Try/Catch statement, allowing errors to be logged when they occur. By placing the logging within the Catch section, specific error details can be accessed that help identify the source of the error, the path to the error, what caused the error, and the definition of the error that occurred. This can then be logged and reviewed at a later date in order to recreate the error that was received, based on data found in the application log. By allowing applications to log errors, developers and IT staff can use the logs to recreate errors that are encountered by end users or other dependent systems.
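
    A minimal C# sketch of the idea above - logging from inside the Catch block so the error's details are captured at the point of failure. The Logger class and the log file name are placeholders invented for illustration, not part of the text.

        using System;
        using System.IO;

        public static class OrderProcessor
        {
            public static void ProcessOrder(string path)
            {
                try
                {
                    string contents = File.ReadAllText(path);
                    // ... normal application work ...
                }
                catch (IOException ex)
                {
                    // Record the event date/time, the responsible operation and a
                    // description of the event, as OWASP suggests.
                    Logger.Log(string.Format("{0:o} ProcessOrder failed for '{1}': {2}",
                                             DateTime.UtcNow, path, ex));
                    throw; // rethrow so callers still see the failure
                }
            }
        }

        public static class Logger
        {
            // Hypothetical sink; a real application might use a logging framework,
            // a database table or the event log instead.
            public static void Log(string message)
            {
                File.AppendAllText("application.log", message + Environment.NewLine);
            }
        }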


  • Editing files without race conditions?

    - by user2569445
    I have a CSV file that needs to be edited by multiple processes at the same time. My question is, how can I do this without introducing race conditions?

    It's easy to write to the end of the file without race conditions by open(2)ing it in "a" (O_APPEND) mode and simply writing to it. Things get more difficult when removing lines from the file. The easiest solution is to read the file into memory, make changes to it, and overwrite the file. If another process writes to it after it is in memory, however, that new data will be lost upon overwriting. To further complicate matters, my platform does not support POSIX record locks, checking for file existence is a race condition waiting to happen, rename(2) replaces the destination file if it exists instead of failing, and editing files in-place leaves empty bytes in them unless the remaining bytes are shifted towards the beginning of the file.

    My idea for removing a line is this (in pseudocode):

        filename = "/home/user/somefile";
        file = open(filename, "r");
        tmp = open(filename+".tmp", "ax") || die("could not create tmp file"); // "a" is O_APPEND, "x" is O_EXCL|O_CREAT
        while(write(tmp, read(file)));   // copy $file to $file+".tmp"
        close(file);
        // edit tmp file
        unlink(filename) || die("could not unlink file");
        file = open(filename, "wx") || die("another process must have written to the file after we copied it."); // "w" is overwrite, "x" is force file creation
        while(write(file, read(tmp)));   // copy ".tmp" back to the original file
        unlink(filename+".tmp") || die("could not unlink tmp file");

    Or would I be better off with a simple lock file?

    Appender process:

        lock = open(filename+".lock", "wx") || die("could not lock file");
        file = open(filename, "a");
        write(file, "stuff");
        close(file);
        close(lock);
        unlink(filename+".lock");

    Editor process:

        lock = open(filename+".lock", "wx") || die("could not lock file");
        file = open(filename, "rw");
        while(contents += read(file));
        // edit "contents"
        write(file, contents);
        close(file);
        close(lock);
        unlink(filename+".lock");

    Both of these rely on an additional file that will be left over if a process terminates before unlinking it, causing other processes to refuse to write to the original file. In my opinion, these problems are brought on by the fact that the OS allows multiple writable file descriptors to be opened on the same file at the same time, instead of failing if a writable file descriptor is already open. It seems that O_CREAT|O_EXCL is the closest thing to a real solution for preventing filesystem race conditions, aside from POSIX record locks.

    Another possible solution is to separate the file into multiple files and directories, so that more granular control can be gained over components (lines, fields) of the file using O_CREAT|O_EXCL. For example, "file/$id/$field" would contain the value of column $field of the line $id. It wouldn't be a CSV file anymore, but it might just work.

    Yes, I know I should be using a database for this, as databases are built to handle these types of problems, but the program is relatively simple and I was hoping to avoid the overhead. So, would any of these patterns work? Is there a better way? Any insight into these kinds of problems would be appreciated.
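
    For what it's worth, here is how the lock-file variant might look in C#, sketched under the assumption that FileMode.CreateNew gives the same fail-if-exists guarantee as O_CREAT|O_EXCL on the target platform; the file names and retry delay are illustrative only.

        using System;
        using System.IO;
        using System.Threading;

        static class CsvEditor
        {
            public static void EditWithLock(string path, Func<string, string> edit)
            {
                string lockPath = path + ".lock";
                FileStream lockFile = AcquireLock(lockPath);
                try
                {
                    string contents = File.ReadAllText(path);  // read the whole file
                    File.WriteAllText(path, edit(contents));    // edit and overwrite it
                }
                finally
                {
                    lockFile.Dispose();
                    File.Delete(lockPath);                      // release the lock
                }
            }

            private static FileStream AcquireLock(string lockPath)
            {
                while (true)
                {
                    try
                    {
                        // CreateNew fails with IOException if the lock file already exists.
                        return new FileStream(lockPath, FileMode.CreateNew, FileAccess.Write);
                    }
                    catch (IOException)
                    {
                        Thread.Sleep(50); // another process holds the lock; retry shortly
                    }
                }
            }
        }

    This still shares the weakness noted above: a process that dies while holding the lock leaves the .lock file behind.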


  • What do you need to know to be a world-class master software developer? [closed]

    - by glitch
    I wanted to bring up this question to you folks and see what you think, and hopefully get your advice on the matter: let's say you had 30 years of learning and practicing software development in front of you. How would you dedicate your time so that you'd get the biggest bang for your buck? What would you both learn and work on to be a world-class software developer that would make a large impact on the industry and leave behind a legacy?

    I think that most great developers end up being both broad generalists and specialists in one or two areas of interest. I'm thinking Bill Joy, John Carmack, Linus Torvalds, K&R and so on. I'm thinking that perhaps one approach would be to break things down by categories and establish a base minimum of "software development" greatness:

    Operating Systems: completely internalize the core concepts of an OS, perhaps gain a lot of familiarity with an OSS one such as Linux. Anything from memory management to device drivers has to be complete second nature.

    Programming Languages: this is one of those topics that imho has to be fully grokked even if it might take many years. I don't think there's quite anything like going through the process of developing your own compiler, understanding language design trade-offs and so on. Programming Language Pragmatics is one of my favorite books actually; I think you want to have that internalized back to back, and that's just the start. You could go significantly deeper, but I think it's time well spent, because it's such a crucial building block. As a subset of that, you want to really understand the different programming paradigms out there: imperative, declarative, logic, functional and so on. Anything from assembly to LISP should be at the very least comfortable to write in.

    Contexts: I believe one should have experience working in different contexts to truly be able to appreciate the trade-offs that are being made every day. Embedded, web development, mobile development, UX development, distributed, cloud computing and so on.

    Hardware: I'm somewhat conflicted about this one. I think you want some understanding of computer architecture at a low level, but I feel like the concepts that will truly matter are slightly higher level, such as CPU caching / memory hierarchy, ILP, and so on.

    Networking: we live in a completely network-dependent era. Having a good understanding of the OSI model, knowing how the Web works, how HTTP works and so on is pretty much a prerequisite these days.

    Distributed systems: once again, everything's distributed these days, and it's getting progressively harder to ignore this reality. Slightly related, perhaps add a solid understanding of how browsers work, since the world seems to be moving so much towards interfacing with everything through a browser.

    Tools: have a really broad toolset that you're familiar with, one that continuously expands throughout the years.

    Communication: I think being a great writer, an effective communicator and a phenomenal team player is pretty much a prerequisite for a lot of a software developer's greatness. It can't be overstated.

    Software engineering: understanding the process of building software, team dynamics, the requirements of the business side, all the pitfalls. You want to deeply understand where what you're writing fits from the market perspective. The better you understand all of this, the more of your work will actually see the light of day.

    This is really just a starting list; I'm confident that there's a ton of other material that you need to master. As I mentioned, you most likely end up specializing in a bunch of these areas as you go along, but I was trying to come up with a baseline. Any thoughts, suggestions and words of wisdom from the grizzled veterans out there who would like to share their thoughts and experiences with this? I'd really love to know what you think!


  • ORA-4031 Troubleshooting Tool

    - by Takeyoshi Sasaki
    [Most of this post was written in Japanese and has been lost to character-encoding corruption; only the fragments below are recoverable.]

    ORA-4031 is raised when a contiguous chunk of memory cannot be allocated from the shared pool in the SGA. The ORA-4031 Troubleshooting Tool is a web-based diagnostic tool available from the Diagnostic Tools Catalog on My Oracle Support: you supply the alert log and an AWR report (a STATSPACK report on 10g and earlier), and the tool analyses them and reports likely causes of the ORA-4031 error together with recommendations.

    Example findings reported by the tool:

    1) High Session_Cached_Cursor Setting Causing Excessive Consumption of Shared Pool.
    2) Insufficient SGA Free Memory at Startup - "This issue could occur if in the init.ora parameters of your Alert log, (shared_pool_size + large_pool_size + java_pool_size + db_keep_cache_size + streams_pool_size + db_cache_size) / sga_target is greater than 90%."

    Supporting evidence cited by the tool:

    1) "In your Alert log, look for parameters under 'System parameters with non-default values:'. If session_cached_cursor * 2000 / shared_pool_size is greater than 10%, then session_cached_cursors are consuming significant shared_pool_size."
    2) "In your Alert log, SGA Utilization (sum of shared_pool_size, large_pool_size, java_pool_size, db_keep_cache_size, streams_pool_size and db_cache_size over sga_target) is 99%, which might be too high."

    Recommendations, with pointers to the relevant My Oracle Support documents:

    1) Decrease the parameter SESSION_CACHED_CURSORS.
    2) Reduce the minimum values for the dynamic SGA components to allow the memory manager to make changes as needed.


  • Manually setting breakpoints in WinDBG

    - by chris
    I am trying to examine the assembly for an executable using WinDbg, but I am having a hard time getting to it. I want to set a breakpoint at the first instruction in my program, but when I try to do that manually (using the address of the module), WinDbg tells me that it is "unable to insert breakpoint" at that location due to an "Invalid access to memory location."

    I notice that when I create a breakpoint through the source code GUI, the address is not the same as the start of my module (in my example: "Win32FileOpen", a simple program I wrote). Is there a header of some sort that requires adding an offset to the address of my module? In another question, I saw the suggestion: "I would attempt to calculate the breakpoint address as: module start + code start + code offset", but I was unsure where to obtain those values. Can somebody please elaborate on this?

    The reason I don't just use the source GUI is that I want to be able to do this with a program that I may not have the source/symbols for. If there is an easier way to immediately start working with the executable I open, please let me know. (e.g. Opening an .exe in Olly immediately shows me the assembly for that .exe, searching for referenced strings gives me results from that module, etc. WinDbg seems to start me off in ntdll.dll, which is not usually useful for me.)

        0:000> lm
        start             end                 module name
        00000000`00130000 00000000`0014b000   Win32FileOpen C (private pdb symbols)  C:\cfinley\code\Win32FileOpen\Debug\Win32FileOpen.pdb
        00000000`73bd0000 00000000`73c2c000   wow64win   (deferred)
        00000000`73c30000 00000000`73c6f000   wow64      (deferred)
        00000000`74fe0000 00000000`74fe8000   wow64cpu   (deferred)
        00000000`77750000 00000000`778f9000   ntdll      (pdb symbols)  c:\symbols\mssymbols\ntdll.pdb\15EB43E23B12409C84E3CC7635BAF5A32\ntdll.pdb
        00000000`77930000 00000000`77ab0000   ntdll32    (deferred)

        0:000> bu 00000000`00130000
        0:000> bl
         0 e x86 00000000`001413a0     0001 (0001)  0:**** Win32FileOpen!main        <-- one that was generated via the GUI
         1 e x86 00000000`00130000     0001 (0001)  0:**** Win32FileOpen!__ImageBase <-- one I tried to set manually

        0:000> g
        Unable to insert breakpoint 1 at 00000000`00130000, Win32 error 0n998
            "Invalid access to memory location."
        bp1 at 00000000`00130000 failed
        WaitForEvent failed
        ntdll!LdrpDoDebuggerBreak+0x31:
        00000000`777fcb61 eb00            jmp     ntdll!LdrpDoDebuggerBreak+0x33 (00000000`777fcb63)


  • System.Runtime.InteropServices.COMException (0x80070008): Not enough storage is available to process

    - by Darryl Braaten
    I am trying to diagnose this exception:

        System.Runtime.InteropServices.COMException (0x80070008): Not enough storage is available to process this command. (Exception from HRESULT: 0x80070008)
           at System.Runtime.Remoting.RemotingServices.AllocateUninitializedObject(RuntimeType objectType)
           at System.Runtime.Remoting.RemotingServices.AllocateUninitializedObject(Type objectType)
           at System.Runtime.Remoting.Activation.ActivationServices.CreateInstance(Type serverType)
           at System.Runtime.Remoting.Activation.ActivationServices.IsCurrentContextOK(Type serverType, Object[] props, Boolean bNewObj)
           at Oracle.DataAccess.Client.CThreadPool..ctor()
           at Oracle.DataAccess.Client.OracleCommand.set_CommandTimeout(Int32 value)
           ...

    It does not look like any of the normal types of "storage" have hit any limits. The application is using about 400MB of memory, 70 threads and 2000 handles, and the hard drive has many GB free. The machine is running Windows 2003 Enterprise Server with 16GB of RAM, so memory shouldn't be an issue. The application is running as a Windows service, so there are no GDI objects being used (running out of GDI handles is a common cause of this exception). Database connections, commands and readers are all wrapped with using blocks, so they should be getting cleaned up correctly.


  • VS 2012 / 2013 AccessViolationException

    - by Goran
    When I run the project (F5) I receive the following exception in the IDE:

        An unhandled exception of type 'System.AccessViolationException' occurred in System.Windows.Forms.dll
        Additional information: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.

    The stack trace reports:

        at System.Windows.Forms.UnsafeNativeMethods.SendMessage(HandleRef hWnd, Int32 msg, IntPtr wParam, IntPtr lParam)
        at System.Windows.Forms.Control.SendMessage(Int32 msg, Int32 wparam, IntPtr lparam)
        at System.Windows.Forms.Form.UpdateWindowIcon(Boolean redrawFrame)
        at System.Windows.Forms.Form.CreateHandle()
        at System.Windows.Forms.Control.get_Handle()
        at Microsoft.VisualStudio.HostingProcess.HostProc.RunParkingWindowThread()
        at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
        at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
        at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
        at System.Threading.ThreadHelper.ThreadStart()

    I have never noticed the same exception when running without the debugger (CTRL+F5). This is a WPF project, but the exception occurs before the App constructor is executed, so this is external code and my application code has not started to execute yet. This happens sporadically: sometimes it happens only once, and sometimes I run the project and get this message several times in a row. Then it does not pop up for 5-6 runs, and then it starts again. Does anyone know why this is happening? I have just installed a clean Windows 8.1 64-bit, VS 2013 and TFS 2013 (although I had the same problem with Windows 8 and VS 2012, but not as often).


  • MemoryStream and BitmapSource

    - by Vinjamuri
    I have a MemoryStream of 10K which was created from a 2MB bitmap and compressed using JPEG. Since a MemoryStream can't be placed directly in a System.Windows.Controls.Image for the GUI, I am using the following intermediate code to convert it back to a BitmapImage and eventually a System.Windows.Controls.Image:

        System.Windows.Controls.Image image = new System.Windows.Controls.Image();
        BitmapImage imageSource = new BitmapImage();
        s.SlideInformation.Thumbnail.Seek(0, SeekOrigin.Begin);
        imageSource.BeginInit();
        imageSource.CacheOption = BitmapCacheOption.OnDemand;
        imageSource.CreateOptions = BitmapCreateOptions.DelayCreation;
        imageSource.StreamSource = s.SlideInformation.Thumbnail;
        imageSource.EndInit();
        imageSource.Freeze();
        image.Source = imageSource;
        ret = image.Source;

    My question is: when I store this in a BitmapImage, the memory allocation is around 2MB. Is this expected? Is there any way to reduce the memory? I have around 300 thumbnails and this conversion takes around 600MB, which is very high. Appreciate your help!
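
    One option that may be worth trying (a sketch only, not from the original post): BitmapImage.DecodePixelWidth asks WPF to decode the JPEG at thumbnail resolution, so the in-memory bitmap is sized for display rather than for the full 2MB source. The class and parameter names below are illustrative.

        using System.IO;
        using System.Windows.Media.Imaging;

        static class ThumbnailLoader
        {
            public static BitmapImage LoadThumbnail(Stream jpegStream, int decodeWidth)
            {
                jpegStream.Seek(0, SeekOrigin.Begin);

                var imageSource = new BitmapImage();
                imageSource.BeginInit();
                imageSource.CacheOption = BitmapCacheOption.OnLoad;  // read the stream now
                imageSource.DecodePixelWidth = decodeWidth;          // e.g. 120 for a small thumbnail
                imageSource.StreamSource = jpegStream;
                imageSource.EndInit();
                imageSource.Freeze();
                return imageSource;
            }
        }

    The decoded size then scales with decodeWidth and the image's aspect ratio instead of with the original bitmap's dimensions.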


  • pylucene: install error

    - by Pradeep
    I am trying to install PyLucene (pylucene-3.3-3-src.tar.gz) on Ubuntu Linux 11.10. I have Python 2.7.2. I was able to compile JCC (I think) because I didn't see any error when I installed it. When I try to install PyLucene I get the following error. Can someone help? Thanks.

        ICU not installed
        /usr/bin/python -m jcc --shared --jar lucene-java-3.3/lucene/build/lucene-core-3.3.jar --jar lucene-java-3.3/lucene/build/contrib/analyzers/common/lucene-analyzers-3.3.jar --jar lucene-java-3.3/lucene/build/contrib/memory/lucene-memory-3.3.jar --jar lucene-java-3.3/lucene/build/contrib/highlighter/lucene-highlighter-3.3.jar --jar build/jar/extensions.jar --jar lucene-java-3.3/lucene/build/contrib/queries/lucene-queries-3.3.jar --jar lucene-java-3.3/lucene/build/contrib/grouping/lucene-grouping-3.3.jar --package java.lang java.lang.System java.lang.Runtime --package java.util java.util.Arrays java.util.HashMap java.util.HashSet java.text.SimpleDateFormat java.text.DecimalFormat java.text.Collator --package java.util.regex --package java.io java.io.StringReader java.io.InputStreamReader java.io.FileInputStream --exclude org.apache.lucene.queryParser.Token --exclude org.apache.lucene.queryParser.TokenMgrError --exclude org.apache.lucene.queryParser.QueryParserTokenManager --exclude org.apache.lucene.queryParser.ParseException --exclude org.apache.lucene.search.regex.JakartaRegexpCapabilities --exclude org.apache.regexp.RegexpTunnel --exclude org.apache.lucene.analysis.cn.smart.AnalyzerProfile --python lucene --mapping org.apache.lucene.document.Document 'get:(Ljava/lang/String;)Ljava/lang/String;' --mapping java.util.Properties 'getProperty:(Ljava/lang/String;)Ljava/lang/String;' --sequence java.util.AbstractList 'size:()I' 'get:(I)Ljava/lang/Object;' --rename org.apache.lucene.search.highlight.SpanScorer=HighlighterSpanScorer --version 3.3 --module python/collections.py --module python/ICUNormalizer2Filter.py --module python/ICUFoldingFilter.py --module python/ICUTransformFilter.py --files 3 --build
        /usr/bin/python: No module named jcc
        make: *** [compile] Error 1

    Here is my Makefile configuration (the lines I uncommented):

        PREFIX_PYTHON=/usr
        ANT=ant
        PYTHON=$(PREFIX_PYTHON)/bin/python
        JCC=$(PYTHON) -m jcc --shared
        NUM_FILES=3


  • Greyscale Image from YUV420p data

    - by fergs
    From what I have read on the internet, the Y value is the luminance value and can be used to create a greyscale image. The following link, http://www.bobpowell.net/grayscale.htm, has some C# code for working out the luminance of a bitmap image:

        public Bitmap ConvertToGrayscale(Bitmap source)
        {
            Bitmap bm = new Bitmap(source.Width, source.Height);
            for (int y = 0; y < bm.Height; y++)
            {
                for (int x = 0; x < bm.Width; x++)
                {
                    Color c = source.GetPixel(x, y);
                    int luma = (int)(c.R * 0.3 + c.G * 0.59 + c.B * 0.11);
                    bm.SetPixel(x, y, Color.FromArgb(luma, luma, luma));
                }
            }
            return bm;
        }

    I have a method that returns the YUV values and I have the Y data in a byte array. I have the current piece of code and it is failing on Marshal.Copy with "attempted to read or write protected memory":

        public Bitmap ConvertToGrayscale2(byte[] yuvData, int width, int height)
        {
            Bitmap bmp;
            IntPtr blue = IntPtr.Zero;
            int inputOffSet = 0;
            long[] pixels = new long[width * height];

            try
            {
                for (int y = 0; y < height; y++)
                {
                    int outputOffSet = y * width;
                    for (int x = 0; x < width; x++)
                    {
                        int grey = yuvData[inputOffSet + x] & 0xff;
                        unchecked
                        {
                            pixels[outputOffSet + x] = UINT_Constant | (grey * INT_Constant);
                        }
                    }
                    inputOffSet += width;
                }

                blue = Marshal.AllocCoTaskMem(pixels.Length);
                Marshal.Copy(pixels, 0, blue, pixels.Length); // fails here: Attempted to read or write protected memory

                bmp = new Bitmap(width, height, width, PixelFormat.Format24bppRgb, blue);
            }
            catch (Exception)
            {
                throw;
            }
            finally
            {
                if (blue != IntPtr.Zero)
                {
                    Marshal.FreeHGlobal(blue);
                    blue = IntPtr.Zero;
                }
            }

            return bmp;
        }

    Any help would be appreciated.
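
    For comparison, here is a sketch (not from the original post) of building the greyscale bitmap directly from the Y plane with LockBits, which sidesteps the manual AllocCoTaskMem / Marshal.Copy bookkeeping. It assumes yuvData holds at least width*height luminance bytes; the class and method names are illustrative.

        using System;
        using System.Drawing;
        using System.Drawing.Imaging;
        using System.Runtime.InteropServices;

        static class YuvGrey
        {
            public static Bitmap FromYPlane(byte[] yuvData, int width, int height)
            {
                var bmp = new Bitmap(width, height, PixelFormat.Format24bppRgb);
                BitmapData data = bmp.LockBits(new Rectangle(0, 0, width, height),
                                               ImageLockMode.WriteOnly, bmp.PixelFormat);
                try
                {
                    var row = new byte[data.Stride];
                    for (int y = 0; y < height; y++)
                    {
                        for (int x = 0; x < width; x++)
                        {
                            byte grey = yuvData[y * width + x]; // luminance
                            row[x * 3]     = grey;              // B
                            row[x * 3 + 1] = grey;              // G
                            row[x * 3 + 2] = grey;              // R
                        }
                        // Copy one correctly sized row at a time, respecting the stride.
                        Marshal.Copy(row, 0, IntPtr.Add(data.Scan0, y * data.Stride), data.Stride);
                    }
                }
                finally
                {
                    bmp.UnlockBits(data);
                }
                return bmp;
            }
        }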


  • Drawing custom graphics on the iPhone: CALayer vs. CGContext

    - by Henry Cooke
    Hi all, I have an application in which I'm doing some custom drawing, a bunch of lines on a gradient background, like so (ignore the text, they're just UILabels): At the moment, that's all done by starting a new CGContext, drawing stuff into it with CGContextDrawLinearGradient and CGContextStrokePath, then finally saving the resulting image with UIGraphicsGetImageFromCurrentImageContext. The positioning info is calculated while I'm laying out those labels, so it'd be a PITA (and duplication of effort) to calculate it all over again when the containing UIView is drawn with drawRect, so I'm drawing it ahead of time into a UIImage. All works fine, so far so good. However, I have a sneaking suspicion that it may be more efficient to use CALayers to do this drawing. My (cursory) understanding of the difference between the two approaches is that a CALayer is more like a bunch of instructions to draw stuff, and so takes up less memory until it's actually drawn onscreen, whereas drawing everything into a UIImage ahead of time means that you've got a sodding great bitmap kicking around in memory all the time, whether it's drawn or not. Is that a correct understanding? What is generally considered to be the best way of drawing custom images on the iPhone?


  • Get an IDataReader from a typed List

    - by Jason Kealey
    I have a List<MyObject> with a million elements. (It is actually a SubSonic collection, but it is not loaded from the database.) I'm currently using SqlBulkCopy as follows:

        private string FastInsertCollection(string tableName, DataTable tableData)
        {
            string sqlConn = ConfigurationManager.ConnectionStrings[SubSonicConfig.DefaultDataProvider.ConnectionStringName].ConnectionString;
            using (SqlBulkCopy s = new SqlBulkCopy(sqlConn, SqlBulkCopyOptions.TableLock))
            {
                s.DestinationTableName = tableName;
                s.BatchSize = 5000;
                s.WriteToServer(tableData);
                s.BulkCopyTimeout = SprocTimeout;
                s.Close();
            }
            return sqlConn;
        }

    I use SubSonic's MyObjectCollection.ToDataTable() to build the DataTable from my collection. However, this duplicates objects in memory and is inefficient. I'd like to use the SqlBulkCopy.WriteToServer overload that takes an IDataReader instead of a DataTable, so that I don't duplicate my collection in memory. What's the easiest way to get an IDataReader from my list? I suppose I could implement a custom data reader (like here: http://blogs.microsoft.co.il/blogs/aviwortzel/archive/2008/05/06/implementing-sqlbulkcopy-in-linq-to-sql.aspx), but there must be something simpler I can do without writing a bunch of generic code.

    Edit: It does not appear that one can easily generate an IDataReader from a collection of objects. Accepting the current answer even though I was hoping for something built into the framework.


  • Silverlight 4 WriteableBitmap ScaleTransform Exception but was working in v3

    - by Imran
    I am getting the following exception for code that used to work in Silverlight 3 but has stopped working since upgrading to Silverlight 4:

        System.AccessViolationException was unhandled
        Message=Attempted to read or write protected memory. This is often an indication that other memory is corrupt.

        namespace SilverlightApplication1
        {
            public partial class MainPage : UserControl
            {
                public MainPage()
                {
                    InitializeComponent();
                }

                private void button1_Click(object sender, RoutedEventArgs e)
                {
                    var OpenFileDialog = new OpenFileDialog();
                    OpenFileDialog.Filter = "*.jpg|*.jpg";
                    if (OpenFileDialog.ShowDialog() == true)
                    {
                        var file = OpenFileDialog.Files.ToArray()[0];
                        ScaleStreamAsBitmap(file.OpenRead(), 200);
                    }
                }

                public static WriteableBitmap ScaleStreamAsBitmap(Stream file, int maxEdgeLength)
                {
                    file.Position = 0;
                    var src = new BitmapImage();
                    var uiElement = new System.Windows.Controls.Image();
                    WriteableBitmap b = null;
                    var t = new ScaleTransform();

                    src.SetSource(file);
                    uiElement.Source = src;

                    // force render
                    uiElement.Effect = new DropShadowEffect() { ShadowDepth = 0, BlurRadius = 0 };

                    // calc scale
                    double scaleX = 1;
                    double scaleY = 1;
                    if (src.PixelWidth > maxEdgeLength)
                        scaleX = ((double)maxEdgeLength) / src.PixelWidth;
                    if (src.PixelHeight > maxEdgeLength)
                        scaleY = ((double)maxEdgeLength) / src.PixelHeight;
                    double scale = Math.Min(scaleX, scaleY);
                    t.ScaleX = scale;
                    t.ScaleY = scale;

                    b = new WriteableBitmap(uiElement, t);
                    return b;
                }
            }
        }

    Thanks


  • File path for J2ME FileConnection?

    - by Kilnr
    Hi, I'm writing a MIDlet which needs to write files. I'm using FileConnection from JSR-75 to accomplish this. The intention is to have this MIDlet running on as many devices as possible (all MIDP 2.0 devices with JSR-75 support, ideally). On several emulators and an HTC Touch Pro2, I can use the following code to get the root of the filesystem:

        Enumeration drives = FileSystemRegistry.listRoots();
        String root = (String) drives.nextElement();
        String path = "file:///" + root;

    However, on a Nokia S60 5th edition emulator, trying to open a FileConnection to this path throws a java.lang.SecurityException. Apparently S60 devices do not allow connections to the root of the filesystem. I realise I can use something like System.getProperty("fileconn.dir.photos"), but that isn't supported on all devices either. So, my actual question: what is the best approach to get a path to create a FileConnection with, that allows for maximum portability? Thanks.

    Edit: I suppose I could iterate over all the roots in the Enumeration and check for a writable one, but that's hardly optimal for two reasons. First, there aren't necessarily any writable roots. Second, this could be the phone memory or a memory card, so the storage method wouldn't be consistent across devices, which is rather ugly.


  • java.lang.OutOfMemoryError: bitmap size exceeds VM budget ?

    - by UMMA
    Friends, I am using the following code to display a bitmap on screen, with next and previous buttons to change images, and I am getting an out of memory error.

    New code:

        HttpGet httpRequest = null;
        try
        {
            httpRequest = new HttpGet(mImage_URL[val]);
        }
        catch (Exception e)
        {
            return 0;
        }
        HttpClient httpclient = new DefaultHttpClient();
        HttpResponse response = (HttpResponse) httpclient.execute(httpRequest);

        Bitmap bm;
        HttpEntity entity = response.getEntity();
        BufferedHttpEntity bufHttpEntity = new BufferedHttpEntity(entity);
        InputStream is = bufHttpEntity.getContent();
        try
        {
            bm = BitmapFactory.decodeStream(is);
        }
        catch (Exception ex)
        {
        }
        is.close();

    Old code:

        URL aURL = new URL(mImage_URL[val]);
        URLConnection conn = aURL.openConnection();
        conn.connect();
        InputStream is = null;
        try
        {
            is = conn.getInputStream();
        }
        catch (IOException e)
        {
        }
        BufferedInputStream bis = new BufferedInputStream(is);
        bm = BitmapFactory.decodeStream(bis);
        bis.close();
        is.close();
        img.setImageBitmap(bm);

    The old code was giving me "decoder->decode returned false" on images bigger than about 400KB, so after googling I found the new code as an answer. The old code was not giving me an out of memory error on those images, but because of the decode failure I chose the new code. Can anyone tell me what the solution is, and which is the best approach to display live images?


  • Dealing with huge SQL resultset

    - by Dave McClelland
    I am working with a rather large mysql database (several million rows) with a column storing blob images. The application attempts to grab a subset of the images and runs some processing algorithms on them. The problem I'm running into is that, due to the rather large dataset that I have, the dataset that my query is returning is too large to store in memory. For the time being, I have changed the query to not return the images. While iterating over the resultset, I run another select which grabs the individual image that relates to the current record. This works, but the tens of thousands of extra queries have resulted in a performance decrease that is unacceptable. My next idea is to limit the original query to 10,000 results or so, and then keep querying over spans of 10,000 rows. This seems like the middle of the road compromise between the two approaches. I feel that there is probably a better solution that I am not aware of. Is there another way to only have portions of a gigantic resultset in memory at a time? Cheers, Dave McClelland
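
    A rough sketch of the "query in spans" idea using keyset paging, so only one batch of rows (and their blobs) is in memory at a time. The connection is left as the generic ADO.NET interface, and the table and column names (images, id, image) are assumptions made purely for illustration.

        using System.Collections.Generic;
        using System.Data;

        static class ImageBatches
        {
            public static IEnumerable<byte[]> ReadImagesInBatches(IDbConnection conn, int batchSize)
            {
                long lastId = 0;
                while (true)
                {
                    int rows = 0;
                    using (IDbCommand cmd = conn.CreateCommand())
                    {
                        // Keyset paging: resume after the last id seen, LIMIT one batch.
                        cmd.CommandText =
                            "SELECT id, image FROM images WHERE id > @lastId ORDER BY id LIMIT @batch";
                        AddParam(cmd, "@lastId", lastId);
                        AddParam(cmd, "@batch", batchSize);

                        using (IDataReader reader = cmd.ExecuteReader())
                        {
                            while (reader.Read())
                            {
                                rows++;
                                lastId = reader.GetInt64(0);
                                yield return (byte[])reader.GetValue(1); // one blob at a time
                            }
                        }
                    }
                    if (rows < batchSize)
                        yield break; // last batch reached
                }
            }

            private static void AddParam(IDbCommand cmd, string name, object value)
            {
                IDbDataParameter p = cmd.CreateParameter();
                p.ParameterName = name;
                p.Value = value;
                cmd.Parameters.Add(p);
            }
        }

    Processing each returned byte[] and letting it go out of scope keeps the working set bounded by roughly one batch.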


  • Help catching AV with WinDbg and ADPlus 7.0

    - by Stoune
    I want to catch a memory access violation in SQL Server Compact Edition like the one described at http://debuggingblog.com/wp/2009/02/18/memory-access-violation-in-sql-server-compact-editionce/

    The suggested config there is: CRASH Quiet MyApp.exe NoDumpOnFirstChance clr;av FullDump gn

    I downloaded the latest Debugging Tools and noticed that Microsoft has rewritten the adplus tool in managed code and changed the syntax of the config file. I rewrote the config file like this:

        <ADPlus Version="2">
          <Settings>
            <RunMode>Crash</RunMode>
            <Option>Quiet</Option>
            <Option>NoDumpOnFirst</Option>
            <Sympath>c:\symbols\</Sympath>
            <OutputDir>c:\work\output\</OutputDir>
            <ProcessName>c:\work\app\output\MyApp.exe</ProcessName>
          </Settings>
          <Exceptions>
            <!-- to get the full dump on clr access violation -->
            <Exception Code="clr;av">
              <Actions1>FullDump</Actions1>
              <ReturnAction1>gn</ReturnAction1>
            </Exception>
          </Exceptions>
        </ADPlus>

    And I get the error "Couldn't find exception with code: clr;av". If I understand correctly, it didn't load the sos extension, but I can't find the right section and syntax I should use to load it. adplus_old.vbs, for some reason, didn't launch the process on Windows 7.

        WinDBG 6.12.0002.633 X86
        ADPlus Engine Version: 7.01.002 02/27/2009

    Does anyone have a working example of a config for debugging a .NET app with the latest adplus.exe?


  • Why is EXC_BAD_ACCESS so unhelpful?

    - by Dustin
    First let me say I come from a background in Flash/AS3, which I realize is not as strict about most things as iPhone/Objective-C. I suspect my question actually applies to AS3 as well, but let me ask it as pertaining to Obj-C. Why is the error EXC_BAD_ACCESS, and others like it, so unhelpful? I realize that it normally means mismanagement of memory somewhere, but why can't it tell you more about the problem? For instance, why doesn't it say "EXC_BAD_ACCESS, you tried to pass pointer suchAndSuch on line 123, however you're an idiot, because you released it on line 69 so it's not available anymore"? I realize I can use the debugger to get more clues about where my error occurred, but many times this is only marginally helpful. For instance, sometimes none of the messages in the stack/thread/whatever are even my code. Other times it is my code, but at the top of the stack will be a message that has 4+ parameters; ok thanks debugger, you narrowed it down to 4 possible pointers, but why can't you just tell me which one!? I'm guessing there's just some fundamental explanation that I missed because of the background I came from, not needing to worry about memory and such. Although there is an error that can happen a lot in AS3 development that is equally mysterious and along the same lines: "Error #1009: Cannot access a property or method of a null object reference", which almost always means a variable you were expecting to be holding something is actually null. Why doesn't it tell me WHICH variable?!


  • calling CreateFile, specifying FILE_ATTRIBUTE_TEMPORARY | FILE_FLAG_DELETE_ON_CLOSE.

    - by alexander-daniels
    Before I describe my problem, here is a description of the program I'm writing. This is a C++ application. The purpose of my program is to create a file in RAM. I read that if you specify FILE_ATTRIBUTE_TEMPORARY | FILE_FLAG_DELETE_ON_CLOSE when creating a file, it will be kept directly in RAM. One of the blogs that talks about this is: http://blogs.msdn.com/larryosterman/archive/2004/04/19/116084.aspx

    I have built a mini-program, but it does not achieve the goal. Instead, it creates a file on the hard drive in the directory I specify. Here's my program:

        #include <windows.h>
        #include <stdio.h>

        int main()
        {
            LPCWSTR str = L"c:\\temp.txt";
            HANDLE fh = CreateFile(str, GENERIC_WRITE, 0, NULL, CREATE_ALWAYS,
                                   FILE_ATTRIBUTE_TEMPORARY | FILE_FLAG_DELETE_ON_CLOSE, NULL);
            if (fh == INVALID_HANDLE_VALUE)
            {
                printf("Could not open c:\\temp.txt");
                return 1;
            }

            DWORD dwBytesWritten;
            for (long i = 0; i < 20000000; i++)
            {
                WriteFile(fh, "This is a test\r\n", 16, &dwBytesWritten, NULL);
            }
            return 0;
        }

    I think there is a problem in the CreateFile call, but I can't fix it. Please help me.


  • How to best transfer large payloads of data using wsHttp with WCF with message security

    - by jpierson
    I have a case where I need to transfer large amounts of serialized object graphs (via NetDataContractSerializer) using WCF with wsHttp. I'm using message security and would like to continue to do so. With this setup I would like to transfer a serialized object graph which can sometimes approach around 300MB or so, but when I try to do so I've started seeing an exception of type System.InsufficientMemoryException appear.

    After a little research, it appears that by default in WCF a result to a service call is contained within a single message containing the serialized data, and this data is buffered on the server until the whole message is completely written. Thus the memory exception is being caused by the fact that the server is running out of memory resources that it is allowed to allocate, because that buffer is full. The two main recommendations that I've come across are to use streaming or chunking to solve this problem, however it is not clear to me what that involves and whether either solution is possible with my current setup (wsHttp / NetDataContractSerializer / message security). So far I understand that with streaming, message security would not work because message encryption and decryption need to work on the whole set of data and not a partial message. Chunking, however, sounds like it might be possible, but it is not clear to me how it would be done with the other constraints that I've listed. If anybody could offer some guidance on what solutions are available and how to go about implementing them I would greatly appreciate it.

    Related resources:

    Chunking Channel
    How to: Enable Streaming
    Large attachments over WCF
    Custom Message Encoder
    Another spotting of InsufficientMemoryException

    I'm also interested in any type of compression that could be done on this data, but it looks like I would probably be best off doing this at the transport level once I can transition to .NET 4.0, so that the client will automatically support the gzip headers, if I understand this properly.
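
    To make the chunking idea concrete, here is a bare-bones client-side sketch: the payload is split into segments small enough to buffer comfortably, and each segment travels through the existing wsHttp channel as an ordinary message-secured call. The IChunkUploader contract and its operations are hypothetical, invented only for illustration.

        using System;
        using System.IO;
        using System.ServiceModel;

        [ServiceContract]
        public interface IChunkUploader // hypothetical contract for illustration
        {
            [OperationContract] Guid BeginUpload();
            [OperationContract] void AppendChunk(Guid uploadId, byte[] data);
            [OperationContract] void CompleteUpload(Guid uploadId);
        }

        public static class ChunkingClient
        {
            public static void Upload(IChunkUploader proxy, Stream source, int chunkSize)
            {
                Guid id = proxy.BeginUpload();
                var buffer = new byte[chunkSize];
                int read;
                while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
                {
                    var chunk = new byte[read];
                    Array.Copy(buffer, chunk, read);
                    // Each call is a normal wsHttp message, so message security
                    // encrypts and signs one manageable chunk at a time.
                    proxy.AppendChunk(id, chunk);
                }
                proxy.CompleteUpload(id);
            }
        }

    The service side would then reassemble the chunks keyed by uploadId, and both ends still have to agree on ordering and failure handling.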


  • Handling exception from unmanaged dll in C#

    - by StuffHappens
    Hello. I have the following function written in C#:

        public static string GetNominativeDeclension(string surnameNamePatronimic)
        {
            if (surnameNamePatronimic == null)
                throw new ArgumentNullException("surnameNamePatronimic");

            IntPtr[] ptrs = null;
            try
            {
                ptrs = StringsToIntPtrArray(surnameNamePatronimic);
                int resultLen = MaxResultBufSize;
                int err = decGetNominativePadeg(ptrs[0], ptrs[1], ref resultLen);
                ThrowException(err);
                return IntPtrToString(ptrs, resultLen);
            }
            catch
            {
                return surnameNamePatronimic;
            }
            finally
            {
                FreeIntPtr(ptrs);
            }
        }

    The function decGetNominativePadeg lives in an unmanaged DLL:

        [DllImport("Padeg.dll", EntryPoint = "GetNominativePadeg")]
        private static extern Int32 decGetNominativePadeg(IntPtr surnameNamePatronimic, IntPtr result, ref Int32 resultLength);

    and it throws an exception: "Attempted to read or write protected memory. This is often an indication that other memory is corrupt." The catch in the C# code doesn't actually catch it. Why? How can I handle this exception? Thank you for your help!

    UPDATED

        private static IntPtr[] StringsToIntPtrArray(params string[] strs)
        {
            IntPtr[] ptrs = new IntPtr[strs.Length + 1];
            for (int i = 0; i < ptrs.Length - 1; i++)
                ptrs[i] = StringToIntPtr(strs[i]);
            ptrs[ptrs.Length - 1] = Marshal.AllocHGlobal(m_MaxResultStringBufSize);
            return ptrs;
        }
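
    One thing that may explain the uncatchable exception (offered here as a hedged guess, not as the confirmed cause): on .NET 4 and later, an AccessViolationException is treated as a corrupted state exception and is not delivered to ordinary catch blocks unless the calling method opts in. A minimal sketch of that opt-in, with an illustrative wrapper method:

        using System;
        using System.Runtime.ExceptionServices;
        using System.Security;

        public static class SafeInterop
        {
            [HandleProcessCorruptedStateExceptions]
            [SecurityCritical]
            public static int CallNativeSafely(Func<int> nativeCall)
            {
                try
                {
                    return nativeCall();
                }
                catch (AccessViolationException)
                {
                    // The AV is now observable here, but the underlying fix is
                    // usually correcting the buffer sizes / marshalling passed
                    // to the unmanaged function rather than swallowing the error.
                    return -1;
                }
            }
        }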


  • OutOfMemoryError: Java heap space error when start solr

    - by Hamid
    Hi, I started indexing DB articles with Solr, but after adding about 58 million articles (about 113 GB on disk) I get the error message below in the Tomcat log.

    Note 1: I have already set the initial memory pool to 256MB and the max memory pool to 1400MB for the Tomcat server.
    Note 2: I can post or search articles, but I must wait over 3 minutes to get a response.

        8-apr-2010 14:27:07 org.apache.solr.common.SolrException log
        SEVERE: java.lang.OutOfMemoryError: Java heap space
            at org.apache.lucene.util.PriorityQueue.initialize(PriorityQueue.java:89)
            at org.apache.lucene.search.HitQueue.<init>(HitQueue.java:67)
            at org.apache.lucene.search.TopScoreDocCollector.<init>(TopScoreDocCollector.java:113)
            at org.apache.lucene.search.TopScoreDocCollector.<init>(TopScoreDocCollector.java:37)
            at org.apache.lucene.search.TopScoreDocCollector$InOrderTopScoreDocCollector.<init>(TopScoreDocCollector.java:42)
            at org.apache.lucene.search.TopScoreDocCollector$InOrderTopScoreDocCollector.<init>(TopScoreDocCollector.java:40)
            at org.apache.lucene.search.TopScoreDocCollector.create(TopScoreDocCollector.java:100)
            at org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:979)
            at org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:884)
            at org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:341)
            at org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:182)
            at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:195)
            at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
            at org.apache.solr.core.SolrCore.execute(SolrCore.java:1316)
            at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:338)
            at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:241)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
            at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
            at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
            at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
            at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
            at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
            at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
            at org.apache.coyote.http11.Http11AprProcessor.process(Http11AprProcessor.java:859)
            at org.apache.coyote.http11.Http11AprProtocol$Http11ConnectionHandler.process(Http11AprProtocol.java:574)
            at org.apache.tomcat.util.net.AprEndpoint$Worker.run(AprEndpoint.java:1527)
            at java.lang.Thread.run(Unknown Source)

    What's the problem? Do you have any suggestions? Thanks in advance.


  • What components and IDE add-ins do you install with Delphi?

    - by Mick
    After a clean install of Delphi, what components and IDE add-ins do you make certain that you install? What's your Delphi "rig"? Here's what I install after a clean installation of Delphi 2007:

    JCL / JVCL - JEDI Code Library and JEDI Visual Code Library (600+ components)
    JWA / JWSCL - JEDI API Library & Security Code Library
    GExperts - a free set of tools built to increase the productivity of Delphi and C++Builder programmers by adding several features to the IDE
    TWM's experimental GExperts code formatter - adds code formatting capabilities to Delphi
    Virtual TreeView - a treeview control built from the ground up; more than 5 years of development have made it one of the most flexible and advanced tree controls available today
    MustangPeak Components (EasyListView, Virtual ShellTools, etc.) - EasyListview is a control that has no dependence on the Microsoft ListView control but has all the features of the latest version from Microsoft; also includes 'Explorer.exe'-like shell components
    Synapse lightweight networking components - simple low-level non-visual objects for easy programming without problems (no required multi-threaded synchronization, no need for Windows message processing, ...); great for command-line utilities, visual projects and NT services
    EurekaLog - a complete bug resolution tool for Delphi and C++Builder developers that gives your application the power to catch every exception and memory leak, directly on the end user's PC, generating a detailed log of the call stack (with file, class, method and line number), optionally sending you a copy of each log entry via email or to a web bug-tracker
    DelphiSpeedUp - an IDE plugin for Delphi and C++Builder that improves the IDE's startup speed and increases the general speed of the whole IDE
    DDevExtensions - extends the Delphi/C++Builder IDE by adding some new productivity features
    IDE Fix Pack - a DLL expert that fixes a number of RAD Studio 2007 bugs at runtime; all changes are done in memory, no file on disk is modified
    TPerlRegex - regular expression library for Delphi

    How about other Delphi developers?

