Search Results

Search found 1587 results on 64 pages for 'deep saurabh'.


  • Recommendations for a C++ polymorphic, seekable, binary I/O interface

    - by Trevor Robinson
    I've been using std::istream and ostream as a polymorphic interface for random-access binary I/O in C++, but it seems suboptimal in numerous ways:

    - 64-bit seeks are non-portable and error-prone due to streampos/streamoff limitations; I'm currently using boost/iostreams/positioning.hpp as a workaround, but it requires vigilance.
    - Missing operations, such as truncating or extending a file (a la POSIX ftruncate).
    - Inconsistency between concrete implementations; e.g. stringstream has independent get/put positions whereas fstream does not.
    - Inconsistency between platform implementations; e.g. the behavior of seeking past the end of a file, or the usage of failbit/badbit on errors.
    - I don't need all the formatting facilities of stream, or possibly even the buffering of streambuf.
    - streambuf error reporting (i.e. exceptions vs. returning an error indicator) is supposedly implementation-dependent in practice.

    I like the simplified interface provided by the Boost.Iostreams Device concept, but it's provided as function templates rather than a polymorphic class. (There is a device class, but it's not polymorphic and is just an implementation helper class, not necessarily used by the supplied device implementations.) I'm primarily using large disk files, but I really want polymorphism so I can easily substitute alternate implementations (e.g. use stringstream instead of fstream for unit tests) without all the complexity and compile-time coupling of deep template instantiation.

    Does anyone have any recommendations of a standard approach to this? It seems like a common situation, so I don't want to invent my own interfaces unnecessarily. As an example, something like java.nio.FileChannel seems ideal. My best solution so far is to put a thin polymorphic layer on top of Boost.Iostreams devices. For example:

        #include <ios>
        #include <boost/iostreams/operations.hpp>   // seek, read, close
        #include <boost/iostreams/positioning.hpp>  // stream_offset

        class my_istream {
        public:
            virtual ~my_istream() {}
            virtual std::streampos seek(boost::iostreams::stream_offset off,
                                        std::ios_base::seekdir way) = 0;
            virtual std::streamsize read(char* s, std::streamsize n) = 0;
            virtual void close() = 0;
        };

        template <class T>
        class boost_istream : public my_istream {
        public:
            boost_istream(const T& device) : m_device(device) {}
            virtual std::streampos seek(boost::iostreams::stream_offset off,
                                        std::ios_base::seekdir way) {
                return boost::iostreams::seek(m_device, off, way);
            }
            virtual std::streamsize read(char* s, std::streamsize n) {
                return boost::iostreams::read(m_device, s, n);
            }
            virtual void close() {
                boost::iostreams::close(m_device);
            }
        private:
            T m_device;  // the wrapped Boost.Iostreams device
        };
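    As a rough illustration of how the wrapper keeps template complexity out of client code, here is a minimal usage sketch. It assumes the classes above plus boost::iostreams::file (a seekable device); the factory function name is made up for the example, and std::unique_ptr stands in for whatever ownership scheme you prefer:

        #include <memory>
        #include <string>
        #include <boost/iostreams/device/file.hpp>

        // Clients only ever see my_istream; the device type stays hidden here.
        std::unique_ptr<my_istream> open_file_stream(const std::string& path) {
            boost::iostreams::file dev(path, std::ios_base::in | std::ios_base::binary);
            return std::unique_ptr<my_istream>(
                new boost_istream<boost::iostreams::file>(dev));
        }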

    Read the article

  • Looking for an Open Source Project in need of help

    - by hvidgaard
    Hi StackOverflow! I'm a CS student well on my way to graduating. I have had a difficult time finding relevant student jobs (they seem to be taken mere hours after the notice goes up on the board), so instead I'm looking for an open source project in need of help. I'm aware that I should choose one that I use, but I'm not aware of any OS project that I use that needs help. That's why I'm asking you. I don't have any deep experience, but here are some of my biggest projects so far:

    - A BitTorrent-ish client in Python (a subset of BitTorrent)
    - An HTTP 1.1 webserver in Java
    - A compiler from a subset of Java to run on the JRE
    - A Flash-framework project to model an iPad look and feel (not to run actual iPad programs), complete with an API for programs
    - A complete MySQL database for a booking system, with departure and arrival times, so you could only book valid tickets (with a Java frontend)

    Java and languages like AS3, C# and Python feel natural to me, and I have done a fair bit of hacking around in C, but I don't feel very comfortable with it. Mostly I'm afraid of making a fuckup because I have such a high degree of control. I would like to think I'm well aware of good software design practices, but in reality what I do is ask myself "would I like to use/maintain this?", and I love to refactor my code when I see optimizations. I love algorithms and making them run in the best possible time.

    I don't have any preferred domain to work in, but I wouldn't mind it being graphics- or math-heavy. Ideally I'm looking for a project in C++ to learn the ins and outs of it, but I'm well aware that I don't know that language very well. I would like to have a mentor-like figure until I'm confident enough to stand on my own - not one to review all my code (I'm sure someone will to start with anyway), but someone to ask questions about the project and language in question. I do have a wife and two children, so don't expect me to put in 10+ hours every week. In return I can work on my own, and I strive to write modular and maintainable code. I know how to read an API and use Google, StackOverflow and online resources in general. If you have any questions, shoot. I'm looking forward to your suggestions.

    Read the article

  • What are the Options for Storing Hierarchical Data in a Relational Database?

    - by orangepips
    Good Overviews

    - One more Nested Intervals vs. Adjacency List comparison: the best comparison of Adjacency List, Materialized Path, Nested Set and Nested Interval I've found.
    - Models for hierarchical data: slides with good explanations of tradeoffs and example usage.
    - Representing hierarchies in MySQL: a very good overview of Nested Set in particular.
    - Hierarchical data in RDBMSs: the most comprehensive and well-organized set of links I've seen, but not much in the way of explanation.

    Options

    Ones I am aware of, and their general features:

    Adjacency List:
    - Columns: ID, ParentID
    - Easy to implement.
    - Cheap node moves, inserts, and deletes.
    - Expensive to find level (can be stored as a computed column), ancestry & descendants (a Bridge Hierarchy combined with a level column can solve this), path (a Lineage Column can solve this).
    - Use Common Table Expressions in those databases that support them to traverse.

    Nested Set (a.k.a. Modified Preorder Tree Traversal):
    - First described by Joe Celko - covered in depth in his book Trees and Hierarchies in SQL for Smarties.
    - Columns: Left, Right
    - Cheap level, ancestry, descendants.
    - Compared to Adjacency List, moves, inserts and deletes are more expensive.
    - Requires a specific sort order (e.g. created), so sorting all descendants in a different order requires additional work.

    Nested Intervals:
    - A combination of Nested Sets and Materialized Path where the left/right columns are floating point decimals instead of integers and encode the path information.

    Bridge Table (a.k.a. Closure Table; some good ideas exist about how to use triggers for maintaining this approach):
    - Columns: ancestor, descendant
    - Stands apart from the table it describes.
    - Can include some nodes in more than one hierarchy.
    - Cheap ancestry and descendants (albeit without ordering).
    - For complete knowledge of a hierarchy it needs to be combined with another option.

    Flat Table:
    - A modification of the Adjacency List that adds a Level and Rank (e.g. ordering) column to each record.
    - Expensive moves and deletes.
    - Cheap ancestry and descendants.
    - Good use: threaded discussion - forums / blog comments.

    Lineage Column (a.k.a. Materialized Path, Path Enumeration):
    - Column: lineage (e.g. /parent/child/grandchild/etc...)
    - Limits how deep the hierarchy can be.
    - Descendants cheap (e.g. LEFT(lineage, #) = '/enumerated/path')
    - Ancestry tricky (database-specific queries).

    Database Specific Notes

    - MySQL: use session variables for the Adjacency List.
    - Oracle: use CONNECT BY to traverse Adjacency Lists.
    - PostgreSQL: the ltree datatype for Materialized Path.
    - SQL Server: general summary; 2008 offers the HierarchyId data type, which appears to help with the Lineage Column approach and expands the depth that can be represented.
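    To make the first option concrete, here is a minimal sketch of an Adjacency List with a recursive CTE traversal. The table and column names are illustrative; the WITH RECURSIVE form shown is PostgreSQL-style (SQL Server omits the RECURSIVE keyword):

        -- Adjacency List: each row points at its parent.
        CREATE TABLE node (
            id        INT PRIMARY KEY,
            parent_id INT NULL REFERENCES node(id),
            name      VARCHAR(100)
        );

        -- All descendants of node 1, with their depth below it.
        WITH RECURSIVE subtree (id, name, depth) AS (
            SELECT id, name, 0 FROM node WHERE id = 1
            UNION ALL
            SELECT n.id, n.name, s.depth + 1
            FROM node n
            JOIN subtree s ON n.parent_id = s.id
        )
        SELECT * FROM subtree;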

    Read the article

  • How to parse an XML string in TDI

    - by ongle
    I am new to TDI. I have a TDI assembly line that calls a web service (ibmdi.InvokeSoapWS), which returns the result as a string in the work attribute 'xmlString'. I then have an AttributeMap that attempts to parse the XML and extract a value (the node it seeks is a few nodes deep).

        var parser = system.getParser('ibmdi.SOAP');
        var xmlString = work.getString('xmlString');
        var entity = parser.parseRequest(xmlString);
        task.dump(entity);

    The trouble is, the parsed object does not contain an accurate representation of the XML. It contains only two attributes: the first is, correctly, the first node following the SOAP body (e.g., ns0:SomeNodeReply); the second is a node inside the first (e.g., ns0:DetailCount). And as far as I can determine, both attributes are just strings, so I cannot recurse into the object graph. Below is a sample SOAP reply:

        <?xml version="1.0" encoding="utf-8"?>
        <SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
          <SOAP-ENV:Body>
            <ns0:SomeNodeReply xmlns:ns0="http://xmlns.example.com/unique/default/namespace/1136581686664">
              <ns0:Status>
                <ns0:StatusCD>000</ns0:StatusCD>
                <ns0:StatusDesc />
              </ns0:Status>
              <ns0:DetailCount>1</ns0:DetailCount>
              <ns0:SomeDetail>
                <ns0:CodeA>Foo</ns0:CodeA>
                <ns0:CodeB>Bar</ns0:CodeB>
              </ns0:SomeDetail>
            </ns0:SomeNodeReply>
          </SOAP-ENV:Body>
        </SOAP-ENV:Envelope>

    And below is a sample dump of the parsed string:

        19:03:23 CTGDIS003I *** Start dumping Entry
        19:03:23 Operation: generic
        19:03:23 Entry attributes:
        19:03:23 SOAP_CALL (replace): 'ns0:SomeNodeReply'
        19:03:23 ns0:DetailCount(replace): '1'
        19:03:23 CTGDIS004I *** Finished dumping Entry

    All I really need is to be able to parse out a value that may or may not be there, depending on the value of another node (e.g., if DetailCount == 1, get CodeA, otherwise return an empty string). I am open to changing anything about how this works if I can extract the data into the work Entry.
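    Since TDI scripts can call Java classes directly, one fallback is to hand the string to the standard JAXP DOM parser and read nodes by namespace-qualified name. This is a hedged sketch, not a TDI-specific parser API; the attribute name 'codeA' and the hard-coded namespace (taken from the sample reply) are illustrative:

        var factory = javax.xml.parsers.DocumentBuilderFactory.newInstance();
        factory.setNamespaceAware(true);
        var dom = factory.newDocumentBuilder().parse(
            new org.xml.sax.InputSource(new java.io.StringReader(xmlString)));

        var ns = "http://xmlns.example.com/unique/default/namespace/1136581686664";
        var codeA = "";
        var count = dom.getElementsByTagNameNS(ns, "DetailCount");
        if (count.getLength() > 0 && count.item(0).getTextContent() == "1") {
            var nodes = dom.getElementsByTagNameNS(ns, "CodeA");
            if (nodes.getLength() > 0) {
                codeA = nodes.item(0).getTextContent();
            }
        }
        work.setAttribute("codeA", codeA);  // empty string if not present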

    Read the article

  • Exception showing an erroneous web page in a WPF frame

    - by H4mm3rHead
    I have a small application where I need to navigate to a URL. I use this method to get the Frame:

        public override System.Windows.UIElement GetPage(System.Windows.UIElement container)
        {
            XmlDocument doc = new XmlDocument();
            doc.Load(Location);
            string webSiteUrl = doc.SelectSingleNode("website").InnerText;
            Frame newFrame = new Frame();
            if (!webSiteUrl.StartsWith("http://"))
            {
                webSiteUrl = "http://" + webSiteUrl;
            }
            newFrame.Source = new Uri(webSiteUrl);
            return newFrame;
        }

    My problem is that the page I'm trying to show generates an error (or so I think). When I load the page in a browser it never fully loads; it keeps saying "loading 1 element" in the load bar, and the green progress line (IE 8) keeps showing. When I attach my debugger I get this error:

        System.ArgumentException was unhandled
        Message="Parameter and value pair is not valid. Expected form is parameter=value."
        Source="WindowsBase"
        StackTrace:
          at MS.Internal.ContentType.ParseParameterAndValue(String parameterAndValue)
          at MS.Internal.ContentType..ctor(String contentType)
          at MS.Internal.WpfWebRequestHelper.GetContentType(WebResponse response)
          at System.Windows.Navigation.NavigationService.GetObjectFromResponse(WebRequest request, WebResponse response, Uri destinationUri, Object navState)
          at System.Windows.Navigation.NavigationService.HandleWebResponse(IAsyncResult ar)
          at System.Windows.Navigation.NavigationService.<>c__DisplayClassc.<HandleWebResponseOnRightDispatcher>b__8(Object unused)
          at System.Windows.Threading.ExceptionWrapper.InternalRealCall(Delegate callback, Object args, Boolean isSingleParameter)
          at System.Windows.Threading.ExceptionWrapper.TryCatchWhen(Object source, Delegate callback, Object args, Boolean isSingleParameter, Delegate catchHandler)
          at System.Windows.Threading.DispatcherOperation.InvokeImpl()
          at System.Threading.ExecutionContext.runTryCode(Object userData)
          at System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode code, CleanupCode backoutCode, Object userData)
          at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
          at System.Windows.Threading.DispatcherOperation.Invoke()
          at System.Windows.Threading.Dispatcher.ProcessQueue()
          at System.Windows.Threading.Dispatcher.WndProcHook(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam, Boolean& handled)
          at MS.Win32.HwndWrapper.WndProc(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam, Boolean& handled)
          at MS.Win32.HwndSubclass.DispatcherCallbackOperation(Object o)
          at System.Windows.Threading.ExceptionWrapper.InternalRealCall(Delegate callback, Object args, Boolean isSingleParameter)
          at System.Windows.Threading.ExceptionWrapper.TryCatchWhen(Object source, Delegate callback, Object args, Boolean isSingleParameter, Delegate catchHandler)
          at System.Windows.Threading.Dispatcher.InvokeImpl(DispatcherPriority priority, TimeSpan timeout, Delegate method, Object args, Boolean isSingleParameter)
          at MS.Win32.HwndSubclass.SubclassWndProc(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam)
          at MS.Win32.UnsafeNativeMethods.DispatchMessage(MSG& msg)
          at System.Windows.Threading.Dispatcher.TranslateAndDispatchMessage(MSG& msg)
          at System.Windows.Threading.Dispatcher.PushFrameImpl(DispatcherFrame frame)
          at System.Windows.Application.RunInternal(Window window)
          at GreenWebPlayerWPF.App.Main() in C:\Development\Hvarregaard\GWDS\GreenWeb\GreenWebPlayerWPF\obj\Debug\App.g.cs:line 0
          at System.AppDomain._nExecuteAssembly(Assembly assembly, String[] args)
          at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly()
          at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
          at System.Threading.ThreadHelper.ThreadStart()
        InnerException:

    Anyone? Or any way to capture it and respond to it? I tried a try/catch around my code, but it's not caught - it seems something deep inside the guts of the CLR is failing.
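    One pattern that may at least let the app respond instead of dying (a hedged sketch - whether it fires for this particular navigation failure is an assumption): the exception is thrown asynchronously on the dispatcher thread, which is why a try/catch around the navigation call never sees it, but WPF routes such exceptions to Application.DispatcherUnhandledException:

        public partial class App : Application
        {
            protected override void OnStartup(StartupEventArgs e)
            {
                base.OnStartup(e);
                DispatcherUnhandledException += (sender, args) =>
                {
                    // The trace shows an ArgumentException from WindowsBase
                    // while parsing the response's Content-Type header.
                    if (args.Exception is ArgumentException)
                    {
                        args.Handled = true;  // log / show a fallback page here
                    }
                };
            }
        }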

    Read the article

  • How do you combine "Revision Control" with "WorkFlow" for R?

    - by Tal Galili
    Hello all,
    I remember coming across R users writing that they use "revision control" (e.g. "source control"), and I am curious to know: how do you combine revision control with your statistical analysis workflow? Two (very) interesting discussions talk about how to deal with the workflow, but neither of them refers to the revision control element:

    - http://stackoverflow.com/questions/1266279/how-to-organize-large-r-programs
    - http://stackoverflow.com/questions/1429907/workflow-for-statistical-analysis-and-report-writing

    A long update to the question:

    Following some people's answers, and Dirk's question in the comment, I would like to direct my question a bit more. After reading the Wiki article about "revision control" (which I was previously not familiar with), it was clear to me that when using revision control, what one does is build a development structure of one's code. This structure leads either to a "final product" or to several branches.

    When building something like, let's say, a website, there is usually one end product you work towards (the website), with some prototypes along the way. But when doing a statistical analysis, the work (in my view) is different. Sometimes you know where you want to get to, but more often you explore: explore cleaning the dataset, explore different methods for statistical analysis, and ask various questions of your data (and I am writing this knowing how Frank Harrell and other experienced statisticians feel about data dredging). That is why the workflow question for statistical programming is (in my view) a serious and deep question, raising many issues. The simpler ones are technical:

    - Which revision control software do you use (and why)?
    - Which IDE do you use (and why)?

    The more interesting questions are about the work process:

    - How do you structure your files? What do you keep as a separate file and what as a revision? Or, asking in a different way: what should be a "branch" and what should be a "sub project" in your code? For example: when starting to explore your data, should a plot be created and then erased because it didn't lead anywhere (but kept as a revision), or should there be a backup file of that path? How you resolve this tension was my initial curiosity.
    - What might I be missing? What rules (of thumb) should one follow to avoid common pitfalls when doing statistical programming with version control?

    My intuition is that statistical programming is inherently different from software development (I am writing this without being a real expert in statistical programming, and even less so in software development). That's why I am unsure which of the lessons I have read here about version control would be applicable.

    Thanks a lot,
    Tal

    Read the article

  • Safari Frames Invisible Scrollbar

    - by mobiuschic42
    I'm working on a website that uses not just frames, but frames within frames (ew, I know, but I don't get to choose). It actually works OK most of the time, but I'm running into a problem with some of the frames within frames in Safari (only). Some of the two-deep frames render in Safari with a small space on the right-hand side of the frame - I think it's just the ones with scrolling set to "no", but fiddling with the scroll settings hasn't fixed it yet. It basically looks like there should be a scroll bar there, but there isn't.

    I've been working on this a while and have tried a lot of things: changing the heights of the rows, changing the scroll settings, adding a cols="100%" attribute, changing the heights of the contents of the frames, as well as checking to make sure widths are set to 100% throughout. Nothing's fixed it so far. Does anyone know what's happening here?

    Here's the basic gist of the code - please forgive the lack of proper quotes; it still renders, and fixing them all in this codebase would be a losing battle:

        <html>
        <frameset id=fset frameborder=0 border=0 framespacing=0
                  onbeforeunload="onAppClosing()" onload="onAppInit()" rows="125px,*,0">
          <frame src="navFrame.html" name=ControlPanel marginwidth=0 marginheight=0 frameborder=0 scrolling=no noresize>
          <frame src="contentFrame.html" name=C marginwidth=0 marginheight=0 frameborder=0 scrolling=no>
          <frame src="invisiFrame.html" name=PING marginwidth=0 marginheight=0 frameborder=0 noresize>
          <noframes><body>Tough luck.</center></body></noframes>
        </frameset>
        </html>

    Inside that second frame (named "C", with src of "contentFrame.html") is this:

        <HTML>
        <HEAD><META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=utf-8"></head>
        <frameset rows="48px,*,28px" border=0 frameborder=0 framespacing=0>
          <frame src="pageTitle.html" name=Title marginwidth=0 marginheight=0 noresize scrolling=no frameborder=0>
          <frame src="content.html" name=ScreenBody marginwidth=0 marginheight=0 frameborder=0>
          <frame src="submitBar.html" name=ContextPanel marginwidth=0 marginheight=0 frameborder=0 scrolling=no noresize>
        </FRAMESET>
        </HTML>

    The troublesome frames are the first frame (named "Title", with src of "pageTitle.html") and the last frame (named "ContextPanel", with src of "submitBar.html"); both have their widths set to 100%, and heights are either 100%, not set, or a value less than or equal to their row height. (The original post ends with a screenshot of the problem.)

    Read the article

  • Using Core Data Concurrently and Reliably

    - by John Topley
    I'm building my first iOS app, which in theory should be pretty straightforward, but I'm having difficulty making it sufficiently bulletproof for me to feel confident submitting it to the App Store.

    Briefly, the main screen has a table view; upon selecting a row it segues to another table view that displays information relevant to the selected row in a master-detail fashion. The underlying data is retrieved as JSON data from a web service once a day and then cached in a Core Data store. The data previous to that day is deleted to stop the SQLite database file from growing indefinitely. All data persistence operations are performed using Core Data, with an NSFetchedResultsController underpinning the detail table view.

    The problem I am seeing is that if you switch quickly between the master and detail screens several times whilst fresh data is being retrieved, parsed and saved, the app freezes or crashes completely. There seems to be some sort of race condition, maybe due to Core Data importing data in the background whilst the main thread is trying to perform a fetch, but I'm speculating. I've had trouble capturing any meaningful crash information; usually it's a SIGSEGV deep in the Core Data stack.

    The list below shows the actual order of events that happen when the detail table view controller is loaded (M = main thread, B = background thread):

    1. viewDidLoad (M)
    2. Get JSON data using AFNetworking (M)
    3. Create child NSManagedObjectContext (MOC) (M)
    4. Parse JSON data (B)
    5. Insert managed objects in child MOC (B)
    6. Save child MOC (B)
    7. Post import completion notification (B)
    8. Receive import completion notification (M)
    9. Save parent MOC (M)
    10. Perform fetch and reload table view (M)
    11. Delete old managed objects in child MOC (B)
    12. Save child MOC (B)
    13. Post deletion completion notification (B)
    14. Receive deletion completion notification (M)
    15. Save parent MOC (M)

    Once the AFNetworking completion block is triggered when the JSON data has arrived, a nested NSManagedObjectContext is created and passed to an "importer" object that parses the JSON data and saves the objects to the Core Data store. The importer executes using the new performBlock method introduced in iOS 5:

        NSManagedObjectContext *child = [[NSManagedObjectContext alloc]
            initWithConcurrencyType:NSPrivateQueueConcurrencyType];
        [child setParentContext:self.managedObjectContext];
        [child performBlock:^{
            // Create importer instance, passing it the child MOC...
        }];

    The importer object observes its own MOC's NSManagedObjectContextDidSaveNotification and then posts its own notification, which is observed by the detail table view controller. When this notification is posted, the table view controller performs a save on its own (parent) MOC. I use the same basic pattern with a "deleter" object for deleting the old data after the new data for the day has been imported. This occurs asynchronously after the new data has been fetched by the fetched results controller and the detail table view has been reloaded.

    One thing I am not doing is observing any merge notifications or locking any of the managed object contexts or the persistent store coordinator. Is this something I should be doing? I'm a bit unsure how to architect this all correctly, so I would appreciate any advice.
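    For the merge-notification question at the end, a hedged sketch of the usual wiring (with nested contexts a child's -save: already pushes changes up into its parent, so this matters mainly for sibling contexts that share a persistent store coordinator; self.managedObjectContext is assumed to be the main-queue parent context):

        [[NSNotificationCenter defaultCenter]
            addObserverForName:NSManagedObjectContextDidSaveNotification
                        object:nil
                         queue:nil
                    usingBlock:^(NSNotification *note) {
            NSManagedObjectContext *saved = note.object;
            // Ignore saves made by the parent itself; merge everyone else's.
            if (saved != self.managedObjectContext) {
                [self.managedObjectContext performBlock:^{
                    [self.managedObjectContext
                        mergeChangesFromContextDidSaveNotification:note];
                }];
            }
        }];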

    Read the article

  • Commitment to Zend Framework - any arguments against?

    - by Pekka
    I am refurbishing a big CMS that I have been working on for quite a number of years now. The product itself is great, but some components, the database and translation classes for example, need urgent replacing - partly self-made as far back as 2002, grown into a bit of a chaos over time, and they might have trouble surviving a security audit.

    So, I've been looking closely at a number of frameworks (or, more exactly, component libraries, as I do not intend to change the basic structure of the CMS) and ended up liking Zend Framework the best. They offer a solid MVC model but don't force you into it, and they offer a lot of professional components that have obviously received a lot of attention. (Did you know there are multiple plurals in Russian, and you can't translate them using a simple ($number == 0) or ($number > 1) switch? I didn't, but Zend_Translate can handle it. Just to illustrate the level of thoroughness the library seems to have been built with.)

    I am now literally at the point of no return, starting to replace key components of the system with the Zend-made ones. I'm not really having second thoughts - and I am surely not looking to incite a flame war - but before going onward, I would like to step back for a moment and ask whether there is anything that speaks against tying a big system closely to Zend Framework.

    What I like about Zend:

    - As far as I can see, very high quality code
    - Extremely well documented, at least regarding introductions to how things work (I haven't had to use the detailed API documentation yet)
    - Backed by a company that has an interest in seeing the framework prosper
    - Well received in the community, with a considerable user base
    - Employs coding standards I like
    - Comes with a full set of unit tests
    - Feels to me like the right choice - or at least one of the right choices - in terms of modern, professional PHP development

    I have been thinking about encapsulating and abstracting ZF's functionality into my own classes to be able to switch frameworks more easily, but have come to the conclusion that this would not be a good idea because:

    - it would be an unnecessary level of abstraction
    - it could cost performance
    - the big advantage of using a framework - the existence of a developer base that is familiar with its components - would partly be cancelled out

    Therefore, the commitment to ZF would be a deep one. Thus my question: is there anything substantial that speaks against committing to the Zend Framework? Do you have insider knowledge of plans of Zend Inc.'s to go evil in 2011 and make it a closed source library? Is Zend Inc. run by vampires? Are there conceptual flaws in the code base you start to notice when you've transitioned all your projects to it? Is the appearance of quality code an illusion? Does the code look good, but run terribly slowly on anything below my quad-core workstation?

    Read the article

  • Moving from Windows to Ubuntu

    - by djzmo
    Hello there,
    I used to program on Windows with Microsoft Visual C++, and I need to make some of my portable programs (written in portable C++) cross-platform, or at least be able to release a working version of my program for both Linux and Windows. I am a total newcomer to Linux application development (and I rarely use the OS itself). So, today, I installed Ubuntu 10.04 LTS (through Wubi) and equipped Code::Blocks with the g++ compiler as my main weapon. Then I compiled my very first Hello World Linux program, and I was confused by the output program. I can run my program through the "Build and Run" menu option in Code::Blocks, but when I tried to launch the compiled application externally through a file browser (in /media/MyNTFSPartition/MyProject/bin/Release; yes, I saved it on my NTFS partition), the program didn't show up. Why? I ran out of ideas.

    I need to change my Windows and Microsoft Visual Studio mindset to a Linux and Code::Blocks mindset. So I came up with these questions:

    1. How can I execute my compiled Linux programs externally (outside the IDE)? In Windows, I simply run the generated executable (.exe) file.
    2. How can I distribute my Linux application? In Windows, I simply distribute the executable files with the corresponding DLL files (if any).
    3. What is the equivalent of LIBs (static libraries) and DLLs (dynamic libraries) in Linux, and how do I use them? In Windows/Visual Studio, I simply add the required libraries to the Additional Dependencies in the Project Settings, and my program automatically links with the required static libraries/DLLs.
    4. Is it possible to use the "binary form" of a C++ library (if provided) so that I wouldn't need to recompile the entire library source code? In Windows, yes: sometimes precompiled *.lib files are provided.
    5. If I want to create a wxWidgets application on Linux, which package should I pick for Ubuntu: wxGTK or wxX11? Can I run a wxGTK program under X11? In Windows, I use wxMSW, of course.
    6. If question no. 4 is answered as possible, do precompiled wxX11/wxGTK libraries exist out there? I haven't tried a deep Google search. In Windows, there is a project called "wxPack" (http://wxpack.sourceforge.net/) that saves a lot of my time.

    Sorry for asking so many questions, but I am really confused about these Linux development fundamentals. Any kind of help would be appreciated =) Thanks.
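    A minimal sketch of the non-IDE workflow from question 1 (file and path names are illustrative): Linux executables need no extension; what matters is the execute permission bit, and file browsers often won't attach a terminal to a console program, which is why running from a terminal is the first thing to try.

        # compile, then run from a terminal in the output directory
        g++ -Wall -o hello main.cpp
        ls -l hello    # should show execute bits, e.g. -rwxr-xr-x
        ./hello        # "./" is needed: the current directory is not on PATH

    (Note that an NTFS partition can be mounted without execute permissions, depending on mount options, which alone can stop a binary stored there from launching.)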

    Read the article

  • WinForm-style Invoke() in unmanaged C++

    - by Matt Green
    I've been playing with a DataBus-type design for a hobby project, and I ran into an issue. Back-end components need to notify the UI that something has happened. My implementation of the bus delivers the messages synchronously with respect to the sender. In other words, when you call Send(), the method blocks until all the handlers have been called. (This allows callers to use stack memory management for event objects.)

    However, consider the case where an event handler updates the GUI in response to an event. If the handler is called, and the message sender lives on another thread, then the handler cannot update the GUI, due to Win32's GUI elements having thread affinity. More dynamic platforms such as .NET allow you to handle this by calling a special Invoke() method to move the method call (and the arguments) to the UI thread. I'm guessing they use the .NET parking window or the like for these sorts of things. A morbid curiosity was born: can we do this in C++, even if we limit the scope of the problem? Can we make it nicer than existing solutions? I know Qt does something similar with the moveToThread() function.

    By nicer, I mean that I'm specifically trying to avoid code of the following form at the top of every UI method:

        if (!this->IsUIThread()) {
            Invoke(MainWindowPresenter::OnTracksAdded, e);
            return;
        }

    This dance was common in WinForms when dealing with this issue. I think this sort of concern should be isolated from the domain-specific code, with a wrapper object made to deal with it. My implementation consists of:

    - DeferredFunction: a functor that stores the target method in a FastDelegate and deep-copies the single event argument. This is the object that is sent across thread boundaries.
    - UIEventHandler: responsible for dispatching a single event from the bus. When the Execute() method is called, it checks the thread ID. If it does not match the UI thread ID (set at construction time), a DeferredFunction is allocated on the heap with the instance, method, and event argument. A pointer to it is sent to the UI thread via PostThreadMessage(). Finally, a hook function for the thread's message pump is used to call the DeferredFunction and deallocate it. Alternatively, I can use a message loop filter, since my UI framework (WTL) supports them.

    Ultimately, is this a good idea? The whole message hooking thing makes me leery. The intent is certainly noble, but are there any pitfalls I should know about? Or is there an easier way to do this?
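    For comparison, a hedged sketch of the same deferred-call idea using a hidden message-only window rather than PostThreadMessage() - thread messages can be silently eaten by modal loops (menus, dialogs, drags), which is one pitfall of the hook approach. The window handle and message constant here are assumptions, and C++11 std::function stands in for FastDelegate:

        #include <windows.h>
        #include <functional>

        const UINT WM_DEFERRED_CALL = WM_APP + 1;
        HWND g_uiWindow;  // assumed: message-only window created on the UI thread

        // Callable from any thread; functor ownership passes to the UI thread.
        void InvokeOnUIThread(std::function<void()> fn) {
            PostMessage(g_uiWindow, WM_DEFERRED_CALL, 0,
                        reinterpret_cast<LPARAM>(new std::function<void()>(std::move(fn))));
        }

        // In the hidden window's procedure, on the UI thread:
        LRESULT CALLBACK UiWndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp) {
            if (msg == WM_DEFERRED_CALL) {
                std::function<void()>* fn = reinterpret_cast<std::function<void()>*>(lp);
                (*fn)();    // runs with UI-thread affinity
                delete fn;
                return 0;
            }
            return DefWindowProc(hwnd, msg, wp, lp);
        }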

    Read the article

  • How to approach copying objects with smart pointers as class attributes?

    - by tomislav-maric
    From the Boost library documentation I read this:

        Conceptually, smart pointers are seen as owning the object pointed to, and thus responsible for deletion of the object when it is no longer needed.

    I have a very simple problem: I want to use RAII for pointer attributes of a class that is Copyable and Assignable. The copy and assignment operations should be deep: every object should have its own copy of the actual data. Also, RTTI needs to be available for the attributes (their type may also be determined at runtime).

    Should I be searching for an implementation of a Copyable smart pointer (the data are small, so I don't need copy-on-write pointers), or do I delegate the copy operation to the copy constructors of my objects, as shown in this answer? Which smart pointer do I choose for simple RAII of a class that is copyable and assignable? (I'm thinking that unique_ptr with copy/assignment operations delegated to the class's copy constructor and assignment operator would be a proper choice, but I am not sure.)

    Here's pseudocode for the problem using raw pointers; it's just a problem description, not running C++ code:

        // Operation interface
        class ModelOperation {
        public:
            virtual void operate() = 0;
        };

        // Implementation of an operation called Special
        class SpecialModelOperation : public ModelOperation {
        private:
            // Private attributes are present here in a real implementation.
        public:
            // Implement operation
            void operate() {}
        };

        // All operations conform to the ModelOperation interface.
        // These are possible operation names:
        // class MoreSpecialOperation;
        // class DifferentOperation;

        // Concrete model with different operations
        class MyModel {
        private:
            ModelOperation* firstOperation_;
            ModelOperation* secondOperation_;

        public:
            MyModel()
                : firstOperation_(0), secondOperation_(0)
            {
                // Forgetting about run-time type definition from input files here.
                firstOperation_ = new MoreSpecialOperation();
                secondOperation_ = new DifferentOperation();
            }

            void operate() {
                firstOperation_->operate();
                secondOperation_->operate();
            }

            ~MyModel() {
                delete firstOperation_;
                firstOperation_ = 0;
                delete secondOperation_;
                secondOperation_ = 0;
            }
        };

        int main() {
            MyModel modelOne;
            // Some internal scope
            {
                // I want modelTwo to have its own set of copied, not referenced,
                // operations, and at the same time I need RAII to work for it
                // as soon as it goes out of scope.
                MyModel modelTwo(modelOne);
            }
            return 0;
        }
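    A hedged sketch of one common resolution: keep std::unique_ptr for the RAII part and delegate deep copying to a virtual clone() on the interface, so the copy constructor never needs to know the concrete type:

        #include <memory>
        #include <utility>

        class ModelOperation {
        public:
            virtual ~ModelOperation() {}
            virtual void operate() = 0;
            // Each concrete operation returns a deep copy of itself.
            virtual std::unique_ptr<ModelOperation> clone() const = 0;
        };

        class MyModel {
            std::unique_ptr<ModelOperation> firstOperation_;
        public:
            MyModel(const MyModel& other)
                : firstOperation_(other.firstOperation_
                                      ? other.firstOperation_->clone()
                                      : nullptr) {}
            MyModel& operator=(MyModel other) {   // copy-and-swap
                std::swap(firstOperation_, other.firstOperation_);
                return *this;
            }
            // unique_ptr deletes the operation automatically: RAII preserved.
        };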

    Read the article

  • 'Scanner' does not name a type error in g++

    - by Max
    Hi. I'm trying to compile code in g++ and I get the following errors:

        In file included from scanner.hpp:8,
                         from scanner.cpp:5:
        parser.hpp:14: error: ‘Scanner’ does not name a type
        parser.hpp:15: error: ‘Token’ does not name a type

    Here's my g++ command:

        g++ parser.cpp scanner.cpp -Wall

    Here's parser.hpp:

        #ifndef PARSER_HPP
        #define PARSER_HPP

        #include <string>
        #include <map>
        #include "scanner.hpp"

        using std::string;

        class Parser {
            // Member Variables
        private:
            Scanner lex;  // Lexical analyzer
            Token look;   // tracks the current lookahead token

            // Member Functions
            <some function declarations>
        };

        #endif

    and here's scanner.hpp:

        #ifndef SCANNER_HPP
        #define SCANNER_HPP

        #include <iostream>
        #include <cctype>
        #include <string>
        #include <map>
        #include "parser.hpp"

        using std::string;
        using std::map;

        enum {
            // reserved words
            BOOL, ELSE, IF, TRUE, WHILE, DO, FALSE, INT, VOID,
            // punctuation and operators
            LPAREN, RPAREN, LBRACK, RBRACK, LBRACE, RBRACE, SEMI, COMMA,
            PLUS, MINUS, TIMES, DIV, MOD, AND, OR, NOT, IS, ADDR,
            EQ, NE, LT, GT, LE, GE,
            // symbolic constants
            NUM, ID, ENDFILE, ERROR
        };

        class Token {
        public:
            int tag;
            int value;
            string lexeme;
            Token() { tag = 0; }
            Token(int t) { tag = t; }
        };

        class Num : public Token {
        public:
            Num(int v) { tag = NUM; value = v; }
        };

        class Word : public Token {
        public:
            Word() { tag = 0; lexeme = "default"; }
            Word(int t, string l) { tag = t; lexeme = l; }
        };

        class Scanner {
        private:
            int line;                 // which line the compiler is currently on
            int depth;                // how deep in the parse tree the compiler is
            map<string, Word> words;  // list of reserved words and used identifiers

            // Member Functions
        public:
            Scanner();
            Token scan();
            string printTag(int);
            friend class Parser;
        };

        #endif

    Does anyone see the problem? I feel like I'm missing something incredibly obvious.
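    A hedged note on the likely mechanism, sketched: the two headers include each other, so when scanner.cpp pulls in scanner.hpp, the nested parser.hpp finds SCANNER_HPP already defined, skips the Scanner/Token definitions, and gets compiled before they exist. Since scanner.hpp only uses the name Parser in a friend declaration, one conventional fix is to drop its #include "parser.hpp" and forward-declare instead:

        // scanner.hpp, in place of #include "parser.hpp":
        class Parser;   // a forward declaration satisfies "friend class Parser;"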

    Read the article

  • VS 2010 IDE 2GB limit

    - by user561732
    I am using VS 2010 on a Windows 7 64-bit system with 8 GB of memory. My application is 32-bit. While in the VS 2010 .NET IDE, the app shows up in the Windows Task Manager as "MyApp.vshost.exe *32", while the VS IDE itself shows up as "devenv.exe *32". I checked, and it appears that the VS 2010 IDE file (devenv.exe) is compiled with the /LargeAddressAware flag. However, when debugging large models, the IDE fails with an Out of Memory exception. In the Windows Task Manager, the "MyApp.vshost.exe *32" process indicates about 1400 MB of memory usage (while the "devenv.exe *32" process is well under 500 MB).

    Is it possible to set the "MyApp.vshost.exe *32" process to be /LargeAddressAware in order to avoid this out-of-memory situation? If so, how can this be done in the IDE? While setting the final application binary to be /LargeAddressAware would work, I still need to be able to debug the app in the IDE with these types of large models.

    I should also note that my app has a deep object hierarchy with many collections that together require a lot of memory. However, my issue is not related to trying to create, say, one large array that requires more than 2 GB of memory.

    I should also note that I am able to run the same app in the VB6 IDE and not get an out-of-memory situation, as long as the VB6 IDE is made /LargeAddressAware. In the case of VB6, the IDE and the app being debugged are part of the same process (and not split into two, as is the case with VS 2010). The VB6 process can be larger than 3 GB without running into out-of-memory issues.

    Ultimately, my objective is to have my app run completely in 64-bit to access more memory. I am hoping that in such cases, the IDE will allow the debugging process to exceed 2 GB without crashing (and certainly more than 1.4 GB, as is the current case). However, for now, while 95% of my app is 64-bit, I am calling a legacy 32-bit COM DLL, and as such, my entire app is forced to still run in 32-bit mode until I replace that DLL.
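    For reference, a hedged sketch of flagging a binary large-address-aware with editbin, which ships with Visual Studio (the vshost.exe is regenerated by the IDE, so a post-build event is one common place to reapply the flag; macros and paths here are illustrative):

        rem post-build event: vsvars32.bat puts editbin on the PATH
        call "%VS100COMNTOOLS%vsvars32.bat"
        editbin /LARGEADDRESSAWARE "$(TargetDir)$(TargetName).vshost.exe"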

    Read the article

  • SUSE EC2 Problem - zypper - Permission denied

    - by phuu
    I'm trying to use zypper to install gcc on my Amazon EC2 instance running SUSE. When I try zypper in gcc, I get:

        Retrieving repository 'SLE11-SDK-SP1' metadata []
        Permission to access 'http://eu-west-1-ec2-update.susecloud.net/repo/install/SLE11-SDK-SP1/sle-11-i586/media.1/media' denied.
        Abort, retry, ignore? [a/r/i/?] (a): i
        Retrieving repository 'SLE11-SDK-SP1' metadata [error]
        Repository 'SLE11-SDK-SP1' is invalid.
        Can't provide /media.1/media : User-requested skipping of a file
        Please check if the URIs defined for this repository are pointing to a valid repository.
        Warning: Disabling repository 'SLE11-SDK-SP1' because of the above error.
        Retrieving repository 'SLE11-SDK-SP1-Updates' metadata [|]
        Permission to access 'http://eu-west-1-ec2-update.susecloud.net/repo/update/SLE11-SDK-SP1-Updates/sle-11-i586/repodata/repomd.xml' denied.
        Abort, retry, ignore? [a/r/i/?] (a): i
        Retrieving repository 'SLE11-SDK-SP1-Updates' metadata [error]
        Repository 'SLE11-SDK-SP1-Updates' is invalid.
        Can't provide /repodata/repomd.xml : User-requested skipping of a file
        Please check if the URIs defined for this repository are pointing to a valid repository.
        Warning: Disabling repository 'SLE11-SDK-SP1-Updates' because of the above error.
        Retrieving repository 'SLES11-Extras' metadata [/]
        Permission to access 'http://eu-west-1-ec2-update.susecloud.net/repo/update/SLES11-Extras/sle-11-i586/repodata/repomd.xml' denied.
        Abort, retry, ignore? [a/r/i/?] (a): r
        Permission to access 'http://eu-west-1-ec2-update.susecloud.net/repo/update/SLES11-Extras/sle-11-i586/repodata/repomd.xml' denied.
        Abort, retry, ignore? [a/r/i/?] (a): zypper in gcc
        Invalid answer 'zypper in gcc'. [a/r/i/?] (a): a
        Retrieving repository 'SLES11-Extras' metadata [error]
        Repository 'SLES11-Extras' is invalid.
        Can't provide /repodata/repomd.xml :
        Please check if the URIs defined for this repository are pointing to a valid repository.
        Warning: Disabling repository 'SLES11-Extras' because of the above error.
        Retrieving repository 'SLES11-SP1' metadata [-]
        Permission to access 'http://eu-west-1-ec2-update.susecloud.net/repo/install/SLES11-SP1/sle-11-i586/media.1/media' denied.
        Abort, retry, ignore? [a/r/i/?] (a): a
        Retrieving repository 'SLES11-SP1' metadata [error]
        Repository 'SLES11-SP1' is invalid.
        Can't provide /media.1/media :
        Please check if the URIs defined for this repository are pointing to a valid repository.
        Warning: Disabling repository 'SLES11-SP1' because of the above error.
        Retrieving repository 'SLES11-SP1-Updates' metadata []
        Permission to access 'http://eu-west-1-ec2-update.susecloud.net/repo/update/SLES11-SP1-Updates/sle-11-i586/repodata/repomd.xml' denied.

    I've searched for the problem and this thread came up, but it offered no solutions. I've tried sces-activate. Am I doing something wrong? I should say I'm very new to this, and I admit I don't really know what I'm doing, but I'm trying to learn about setting up and running a server, and so I thought I'd throw myself in at the deep(ish) end. Thanks for reading.

    Read the article

  • How to setup linux permissions for the WWW folder?

    - by Xeoncross
    Updated Summary

    The /var/www directory is owned by root:root, which means that no one can use it and it's entirely useless. Since we all want a web server that actually works (and no one should be logging in as "root"), we need to fix this.

    Only two entities need access:

    1. PHP/Perl/Ruby/Python all need access to the folders and files, since they create many of them (i.e. /uploads/). These scripting languages should be running under nginx or apache (or even some other thing, like FastCGI for PHP).
    2. The developers

    How do they get access? I know that someone, somewhere has done this before. With however-many billions of websites out there, you would think there would be more information on this topic.

    I know that 777 is full read/write/execute permission for owner/group/other. So this doesn't seem to be needed, as it leaves random users full permissions. What permissions need to be used on /var/www so that:

    1. Source control like git or svn
    2. Users in a group like "websites" (or even added to "www-data")
    3. Servers like apache or lighttpd
    4. And PHP/Perl/Ruby

    can all read, create, and run files (and directories) there?

    If I'm correct, Ruby and PHP scripts are not "executed" directly - they are passed to an interpreter. So there is no need for execute permission on files in /var/www...? Therefore, it seems like the correct permission would be chmod -R 1660, which would make:

    - all files shareable by these four entities
    - all files non-executable by mistake
    - everyone else blocked from the directory entirely
    - the permission mode "sticky" for all future files

    Is this correct?

    Update 1: I just realized that files and directories might need different permissions - I was talking about files above, so I'm not sure what the directory permissions would need to be.

    Update 2: The folder structure of /var/www changes drastically, as one of the four entities above is always adding (and sometimes removing) folders and sub-folders many levels deep. They also create and remove files that the other three entities might need read/write access to. Therefore, the permissions need to do the four things above for both files and directories. Since none of them should need execute permission (see the question about Ruby/PHP above), I would assume that rw-rw-r-- permission would be all that is needed and completely safe, since these four entities are run by trusted personnel (see #2) and all other users on the system only have read access.

    Update 3: This is for personal development machines and private company servers. No random "web customers" like a shared host.

    Update 4: This article by Slicehost seems to be the best at explaining what is needed to set up permissions for your www folder. However, I'm not sure what user or group apache/nginx with PHP or svn/git run as, and how to change them.

    Update 5: I have (I think) finally found a way to get this all to work (answer below). However, I don't know if this is the correct and SECURE way to do this. Therefore I have started a bounty. The person who has the best method of securing and managing the www directory wins.
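    For illustration only, a minimal sketch of one common arrangement (names are examples, not a vetted security recipe; note that directories, unlike script files, do need the execute bit, since on a directory it grants traversal):

        # shared group, group-writable files, setgid on directories so
        # new files inherit the www-data group
        sudo chown -R www-data:www-data /var/www
        sudo usermod -a -G www-data some_developer
        sudo find /var/www -type d -exec chmod 2775 {} \;   # rwxrwxr-x + setgid
        sudo find /var/www -type f -exec chmod 0664 {} \;   # rw-rw-r--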

    Read the article

  • Ubuntu 14.04, OpenLDAP TLS problems

    - by larsemil
    So I have set up an OpenLDAP server using this guide here. It worked fine. But as I want to use sssd, I also need TLS to be working for LDAP. So I looked into it and followed the TLS part of the guide. I never got any errors, and slapd started fine again. BUT: it does not seem to work when I try to use LDAP over TLS.

        root@server:~# ldapsearch -x -ZZ -H ldap://83.209.243.253 -b dc=daladevelop,dc=se
        ldap_start_tls: Protocol error (2)
                additional info: unsupported extended operation

    Cranking the debug level up some notches returns some more information:

        root@server:~# ldapsearch -x -ZZ -H ldap://83.209.243.253 -b dc=daladevelop,dc=se -d 5
        ldap_url_parse_ext(ldap://83.209.243.253)
        ldap_create
        ldap_url_parse_ext(ldap://83.209.243.253:389/??base)
        ldap_extended_operation_s
        ldap_extended_operation
        ldap_send_initial_request
        ldap_new_connection 1 1 0
        ldap_int_open_connection
        ldap_connect_to_host: TCP 83.209.243.253:389
        ldap_new_socket: 3
        ldap_prepare_socket: 3
        ldap_connect_to_host: Trying 83.209.243.253:389
        ldap_pvt_connect: fd: 3 tm: -1 async: 0
        ldap_open_defconn: successful
        ldap_send_server_request
        ber_scanf fmt ({it) ber:
        ber_scanf fmt ({) ber:
        ber_flush2: 31 bytes to sd 3
        ldap_result ld 0x7f25df51e220 msgid 1
        wait4msg ld 0x7f25df51e220 msgid 1 (infinite timeout)
        wait4msg continue ld 0x7f25df51e220 msgid 1 all 1
        ** ld 0x7f25df51e220 Connections:
        * host: 83.209.243.253 port: 389 (default) refcnt: 2 status: Connected
        last used: Fri Jun 6 08:52:16 2014
        ** ld 0x7f25df51e220 Outstanding Requests:
        * msgid 1, origid 1, status InProgress
        outstanding referrals 0, parent count 0
        ld 0x7f25df51e220 request count 1 (abandoned 0)
        ** ld 0x7f25df51e220 Response Queue:
        Empty
        ld 0x7f25df51e220 response count 0
        ldap_chkResponseList ld 0x7f25df51e220 msgid 1 all 1
        ldap_chkResponseList returns ld 0x7f25df51e220 NULL
        ldap_int_select
        read1msg: ld 0x7f25df51e220 msgid 1 all 1
        ber_get_next
        ber_get_next: tag 0x30 len 42 contents:
        read1msg: ld 0x7f25df51e220 msgid 1 message type extended-result
        ber_scanf fmt ({eAA) ber:
        read1msg: ld 0x7f25df51e220 0 new referrals
        read1msg: mark request completed, ld 0x7f25df51e220 msgid 1
        request done: ld 0x7f25df51e220 msgid 1
        res_errno: 2, res_error: <unsupported extended operation>, res_matched: <>
        ldap_free_request (origid 1, msgid 1)
        ldap_parse_extended_result
        ber_scanf fmt ({eAA) ber:
        ldap_parse_result
        ber_scanf fmt ({iAA) ber:
        ber_scanf fmt (}) ber:
        ldap_msgfree
        ldap_err2string
        ldap_start_tls: Protocol error (2)
                additional info: unsupported extended operation
        ldap_free_connection 1 1
        ldap_send_unbind
        ber_flush2: 7 bytes to sd 3
        ldap_free_connection: actually freed

    So no good information there either. In /var/log/syslog I get:

        Jun  6 08:55:42 master slapd[21383]: conn=1008 fd=23 ACCEPT from IP=83.209.243.253:56440 (IP=0.0.0.0:389)
        Jun  6 08:55:42 master slapd[21383]: conn=1008 op=0 EXT oid=1.3.6.1.4.1.1466.20037
        Jun  6 08:55:42 master slapd[21383]: conn=1008 op=0 do_extended: unsupported operation "1.3.6.1.4.1.1466.20037"
        Jun  6 08:55:42 master slapd[21383]: conn=1008 op=0 RESULT tag=120 err=2 text=unsupported extended operation
        Jun  6 08:55:42 master slapd[21383]: conn=1008 op=1 UNBIND
        Jun  6 08:55:42 master slapd[21383]: conn=1008 fd=23 closed

    If I portscan the host I get the following:

        Starting Nmap 6.40 ( http://nmap.org ) at 2014-06-06 08:56 CEST
        Nmap scan report for h83-209-243-253.static.se.alltele.net (83.209.243.253)
        Host is up (0.0072s latency).
        Not shown: 996 closed ports
        PORT    STATE SERVICE
        22/tcp  open  ssh
        80/tcp  open  http
        389/tcp open  ldap
        636/tcp open  ldapssl

    But when I check certs:

        root@master:~# openssl s_client -connect daladevelop.se:636 -showcerts -state
        CONNECTED(00000003)
        SSL_connect:before/connect initialization
        SSL_connect:unknown state
        140244859233952:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:s23_lib.c:177:
        ---
        no peer certificate available
        ---
        No client certificate CA names sent
        ---
        SSL handshake has read 0 bytes and written 317 bytes
        ---
        New, (NONE), Cipher is (NONE)
        Secure Renegotiation IS NOT supported
        Compression: NONE
        Expansion: NONE
        ---

    And I feel like I am clearly out in deep water, not knowing at all where to go from here. Any hints appreciated on what to do or how to get better debug logging...

    EDIT: This is my config slapcat'ed from cn=config, and it does not mention anything at all about TLS. I have inserted my certinfo.ldif:

        root@master:~# cat certinfo.ldif
        dn: cn=config
        add: olcTLSCACertificateFile
        olcTLSCACertificateFile: /etc/ssl/certs/cacert.pem
        -
        add: olcTLSCertificateFile
        olcTLSCertificateFile: /etc/ssl/certs/daladevelop_slapd_cert.pem
        -
        add: olcTLSCertificateKeyFile
        olcTLSCertificateKeyFile: /etc/ssl/private/daladevelop_slapd_key.pem

    and when doing that I only got this as an answer:

        root@master:~# sudo ldapmodify -Y EXTERNAL -H ldapi:/// -f certinfo.ldif
        SASL/EXTERNAL authentication started
        SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
        SASL SSF: 0
        modifying entry "cn=config"

    So still no wiser.
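    A hedged diagnostic that fits the uncertainty above: dumping the config database directly would confirm whether the olcTLS* attributes were actually applied to cn=config.

        slapcat -b cn=config | grep olcTLS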

    Read the article

  • How to delete files and folders that cannot be deleted?

    - by glenneroo
    I have a backup copy of a previous Windows' Documents and Settings folder which only contains my original user and within 2 more directories: Favorites and Local Settings. When I try to delete Local Settings I get this error: When I try to delete Favorites, I get this error: I ran this in a cmd shell: attrib *.* -r -a -s -h /s ...but it did not help, nor did it return any errors/warnings. I used Unlocker v1.8.5 and LockHunter repeatedly at multiple levels to see if any files are in use, but both always say: No Files Locked. Update #1: I was able to rename the directory, which now gives me this warning before (trying to) delete: If I press Yes (or Yes to All) then I get this error: Update #2: I let chkdsk /f run which required a reboot since it's on my primary system partition. During Stage 2 scanning, I received about 40 of these: Deleting an index entry from index $0 of file 25. ...followed by: Deleting index entry cookies in index $I30 of file 37576. ...but I still get the first error dialog above when trying to delete. I ran chkdsk again, this time: chkdsk /f /r. Produced no messages. Same result when deleting. Update #3: Digging deeper, the 99 is the name of one of many directories located deep in here: C:\Documents and Settings.OLD\User\Local Settings\Application Data\Microsoft\Messenger\[email protected]\SharingMetadata\[email protected]\DFSR\Staging\CS{D4E4AE55-B5E2-F03B-5189-6C4DA6E41788}\ Inside each of those directories were files with names such as: 2300-{C93D01AC-0739-4FD9-88C7-13D2F21A208E}-v2300-{C93D01AC-0739-4FD9-88C7-13D2F21A208E}-v2300-Downloaded.frx I noticed that, unlike all the directories, I couldn't rename any of these files. I also noticed that the file + dir names were extremely long: Original directory = 194 characters Filenames = 100+ characters Together the length exceeds the 255-char limit which is bad and would explain the error message I posted in Update #1. Partial Solution: Rename all directories until the total path length is less than 100. Afterwards I was able to rename the .frx files, not to mention delete everything inside the Local Settings directory. This is only a partial solution because these (empty) directories are still not deleteable, C:\1\2\Favorites\Wien\What To Do.. C:\1\2\Favorites\Photography\FIRE Same error as above: Here is what Explorer properties shows for both folders: Update #4 (another partial solution): Using harrymc's answer combined with thoroughly reading through this amazing MS-KB article which contains nearly everyone's idea and then some, inconspicuously titled: You cannot delete a file or a folder on an NTFS file system volume. I was able to delete the 2nd folder C:\1\2\Favorites\Photography\FIRE - the problem being that there was an invisible trailing space at the end. I got lucky when I did an auto-complete whilst playing around with the del "\\?\<path>" command which he suggested. NOTE: A normal del did NOT work, nor did deleting from explorer. Now all that is left is the first directory C:\1\2\Favorites\Wien\What To Do.. (yes I tried endlessly with multiple combinations of the above solution ;) Keep 'em coming! =)

    Read the article

  • What *exactly* gets screwed when I kill -9 or pull the power?

    - by Mike
    Set-Up: I've been a programmer for quite some time now but I'm still a bit fuzzy on deep, internal stuff. Now, I am well aware that it's not a good idea to either kill -9 a process (bad) or spontaneously pull the power plug on a running computer or server (worse). However, sometimes you just plain have to. Sometimes a process just won't respond no matter what you do, and sometimes a computer just won't respond, no matter what you do. Let's assume a system running Apache 2, MySQL 5, PHP 5, and Python 2.6.5 through mod_wsgi. Note: I'm most interested in Mac OS X here, but an answer that pertains to any UNIX system would help me out.

    My Concern: Each time I have to do either one of these, especially the second, I'm very worried for a period of time that something has been broken. Some file somewhere could be corrupt -- who knows which file? There are over 1,000,000 files on the computer. I'm often using OS X, so I'll run a "Verify Disk" operation through the Disk Utility. It will report no problems, but I'm still concerned about this. What if some configuration file somewhere got screwed up? Or even worse, what if a binary file somewhere is corrupt? Or a script file somewhere is corrupt now? What if some hardware is damaged? What if I don't find out about it until next month, in a critical scenario, when the corruption or damage causes a catastrophe? Or, what if valuable data is already lost?

    My Hope: My hope is that these concerns and worries are unfounded. After all, after doing this many times before, nothing truly bad has happened yet. The worst is I've had to repair some MySQL tables, but I don't seem to have lost any data. But, if my worries are not unfounded, and real damage could happen in either situation 1 or 2, then my hope is that there is a way to detect it and protect against it.

    My Question(s): Could this be because modern operating systems are designed to ensure that nothing is lost in these scenarios? Could this be because modern software is designed to ensure that nothing is lost? What about modern hardware design? What measures are in place when you pull the power plug? My question is, for both of these scenarios, what exactly can go wrong, and what steps should be taken to fix it? I'm under the impression that one thing that can go wrong is that some programs might not have flushed their data to the disk, so any highly recent data that was supposed to be written to the disk (say, a few seconds before the power pull) might be lost. But what about beyond that? And can this very issue of 5-second data loss screw up a system? What about corruption of random files hiding somewhere in the huge forest of files on my hard drives? What about hardware damage?

    What Would Help Me Most:
    - Detailed descriptions of what goes on internally when you either kill -9 a process or pull the power on the whole system (it seems instant, but can someone slow it down for me?).
    - Explanations of all the things that could go wrong in these scenarios, along with (rough, of course) probabilities (i.e., this is very unlikely, but this is likely).
    - Descriptions of the measures in place in modern hardware, operating systems, and software to prevent damage or corruption when these scenarios occur (to comfort me).
    - Instructions for what to do after a kill -9 or a power pull, beyond "verifying the disk", in order to truly make sure nothing is corrupt or damaged somewhere on the drive.
    - Measures that can be taken to fortify a computer setup so that if something has to be killed or the power has to be pulled, any potential damage is mitigated.

    Thanks so much!
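    On the flush question specifically: the at-risk window is exactly the data an application has accepted but not yet forced to stable storage. A minimal C sketch of where the boundaries sit (illustrative only, not a recipe):

        #include <fcntl.h>
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>

        int main(void) {
            const char msg[] = "critical record\n";
            int fd = open("journal.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
            if (fd < 0) { perror("open"); return 1; }

            /* After write() returns, the data typically sits in the kernel
               page cache: a kill -9 of this process can no longer lose it
               (the kernel owns it now), but a power pull before writeback
               still can. */
            if (write(fd, msg, strlen(msg)) != (ssize_t)strlen(msg)) {
                perror("write");
                return 1;
            }

            /* fsync() blocks until the kernel has pushed the data toward the
               disk; on OS X, fcntl(fd, F_FULLFSYNC) additionally asks the
               drive to drain its own write cache. */
            if (fsync(fd) < 0) { perror("fsync"); return 1; }

            close(fd);
            return 0;
        }

    The same boundary explains the kill -9 half: data still sitting in user-space buffers (e.g. printf output without an fflush) dies with the process, while data already handed to write() survives, because SIGKILL destroys the process but not the kernel's copy of its completed I/O.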

    Read the article

  • The best dvd ripper software in 2014 review

    - by user328170
    The Top 3 DVD Ripping Tools in 2014. Nowadays everyone may own several smart mobile devices, such as an iPhone, iPad Air, iPad mini, Samsung Galaxy or Sony Xperia. If you want to take your movies with you on your mobile devices, or sometimes just want to back up those classic physical discs on your notebook or workstation at high quality, you need fast and stable software to rip them and convert them to the format you like. Fortunately, there are plenty of great software products designed to make the process easy and transform a DVD into files that are playable on any mobile device you choose. We have done a full review of dozens of products; here are three of the best, based on that review. We tested each product on ripping speed, ease of use, reliability and ripping capability.

    The top one is still WinX DVD Ripper Platinum. We tested its 6.1 version two years ago for its ability to quickly and easily rip DVDs and Blu-ray discs to high-quality MKV files with a single click, and it made a deep impression in that test. This time we tested its latest 7.3.5 version. Besides ease of use and speed, we tested its ability to decrypt all kinds of discs using different protection methods, for example Disney X-project DRM, Sony ArccOS, RCE and region codes. The results show that WinX DVD Ripper Platinum still holds its advantages in all these areas. It is a focused DVD ripping tool whose basic duty is to rip and convert DVDs. The color scheme of the UI has a modern, technical feel, and all the main functions are shown up front while other specials are tucked away for advanced users, making options clearer and more convenient to choose. Two companies provide the software: Weisoft Limited, which focuses on the USA, UK and Australia markets, and Digiarty, which focuses on the others. Ripping speed ★★★★★, ease of use ★★★★★, reliability ★★★★★, ripping capability ★★★★★.

    The second one is DVDFab. DVDFab is also very robust when ripping discs and can likewise decrypt most of the discs on the market; its weak points remain ease of use and speed. We'd note that the app is frequently updated to cut through the copy protection on even the latest DVDs and Blu-ray discs. The app is shareware, meaning most features are free, including decrypting and ripping to your hard drive. Many users note that they use another app for compression and authoring, but many also say: hey, storage is cheap, and the rips from DVDFab are easy, one-click, and they work. Ripping speed ★★★, ease of use ★★★★, reliability ★★★★★, ripping capability ★★★★★.

    The third one is HandBrake. HandBrake is our favorite video encoder for a reason: it's simple, easy to use, easy to install, and offers lots of options to get a high-quality file as a result. If you're scared by the options, you don't even have to use them: the app will compensate for you and pick settings it thinks you'll like based on your destination device. So many people like HandBrake that many use it in conjunction with another app (like VLC, which makes ripping easy): they let another app do the rip and crack the DRM on the discs, and then process the file through HandBrake for encoding. The app is fast, can make the most of multi-core processors to speed up the process, and is completely open source. Ripping speed ★★★, ease of use ★★★★, reliability ★★★★, ripping capability ★★★★.

    Read the article

  • AutoMapper MappingFunction from Source Type of NameValueCollection

    - by REA_ANDREW
    I have had a situation arise today where I need to construct a complex type from a source of a NameValueCollection. A little while back I submitted a patch for the Agatha Project to include REST (JSON and XML) support for the service contract. I realized today that, as useful as it is, it did not actually support true REST conformance, as REST should support GET so that you can use JSONP from JavaScript directly, meaning you can query cross-domain services. My original implementation for POX and JSON used the POST method, and this immediately rules out JSONP, as from what I have read JSONP only works with GET requests.

    This then raised another issue. The current operation contract of Agatha, and one of its main benefits, is that you can supply an array of Request objects in a single request, limiting the amount of server requests you need to make. Now, at the present time I am thinking that this will not be the case for the REST implementation, but it will yield these benefits: the same Request objects can be used for SOAP and REST (POX, JSON); the construction of the JavaScript functions will be simpler and more readable; and it will enable the use of JSONP for cross-domain REST services.

    The current contract for the Agatha WcfRequestProcessor is, at time of writing, the following:

        [ServiceContract]
        public interface IWcfRequestProcessor
        {
            [OperationContract(Name = "ProcessRequests")]
            [ServiceKnownType("GetKnownTypes", typeof(KnownTypeProvider))]
            [TransactionFlow(TransactionFlowOption.Allowed)]
            Response[] Process(params Request[] requests);

            [OperationContract(Name = "ProcessOneWayRequests", IsOneWay = true)]
            [ServiceKnownType("GetKnownTypes", typeof(KnownTypeProvider))]
            void ProcessOneWayRequests(params OneWayRequest[] requests);
        }

    My current proposed solution, at the very early stages of the concept, is as follows:

        [ServiceContract]
        public interface IWcfRestJsonRequestProcessor
        {
            [OperationContract(Name = "process")]
            [ServiceKnownType("GetKnownTypes", typeof(KnownTypeProvider))]
            [TransactionFlow(TransactionFlowOption.Allowed)]
            [WebGet(UriTemplate = "process/{name}/{*parameters}",
                    BodyStyle = WebMessageBodyStyle.WrappedResponse,
                    ResponseFormat = WebMessageFormat.Json)]
            Response[] Process(string name, NameValueCollection parameters);

            [OperationContract(Name = "processoneway", IsOneWay = true)]
            [ServiceKnownType("GetKnownTypes", typeof(KnownTypeProvider))]
            [WebGet(UriTemplate = "process-one-way/{name}/{*parameters}",
                    BodyStyle = WebMessageBodyStyle.WrappedResponse,
                    ResponseFormat = WebMessageFormat.Json)]
            void ProcessOneWayRequests(string name, NameValueCollection parameters);
        }

    Now this part I have not yet implemented. It is the preliminary step which I have developed, which will allow me to take the name of the Request type and the NameValueCollection and construct the complex type which is the Request. I can then supply that to a nested instance of the original IWcfRequestProcessor, which will work as it normally does. To give an example, some of the urls which I envisage with this method are:

        http://www.url.com/service.svc/json/process/getweather/?location=london
        http://www.url.com/service.svc/json/process/getproductsbycategory/?categoryid=1
        http://www.url.com/service.svc/json/process/sayhello/?name=andy

    Another reason why my direction has gone to a single request for the REST implementation is because of the restrictions which browsers impose on the length of the url. From what I have read this is on average 2000 characters.
    I think that is a very acceptable usage limit in the context of a single request, but I do not think it is acceptable for accommodating multiple requests chained together. I would love to be corrected on that one, I really would, but unfortunately from what I have read I have come to the conclusion that this is not the case.

    The mapping function. So, as I say, this is just the first pass I have made at this, and I am not overly happy with the try/catch for detecting types without default constructors. I know there is a better way, but for the minute it escapes me. I would also like to know the correct way of registering mapping functions, rather than the anonymous way that I have used. To achieve this I have used recursion, which I am sure is what other mapping functions use, as you do have to go as deep as the complex type is.

        public static object RecurseType(NameValueCollection collection, Type type, string prefix)
        {
            try
            {
                var returnObject = Activator.CreateInstance(type);

                foreach (var property in type.GetProperties())
                {
                    foreach (var key in collection.AllKeys)
                    {
                        if (String.IsNullOrEmpty(prefix) || key.Length > prefix.Length)
                        {
                            // Strip the accumulated parent-type prefix off the key so it
                            // can be compared against the current property's name.
                            var propertyNameToMatch = String.IsNullOrEmpty(prefix)
                                ? key
                                : key.Substring(property.Name.IndexOf(prefix) + prefix.Length + 1);

                            if (property.Name == propertyNameToMatch)
                            {
                                property.SetValue(returnObject, Convert.ChangeType(collection.Get(key), property.PropertyType), null);
                            }
                            else if (property.GetValue(returnObject, null) == null)
                            {
                                // No direct match: assume a nested complex type and recurse,
                                // extending the prefix with the child type's name.
                                property.SetValue(returnObject, RecurseType(collection, property.PropertyType, String.Concat(prefix, property.PropertyType.Name)), null);
                            }
                        }
                    }
                }

                return returnObject;
            }
            catch (MissingMethodException)
            {
                // Quite a blunt way of dealing with types without a default constructor
                return null;
            }
        }

    Another thing is performance. I have not measured this in any way; it is, as I say, a first pass, and I hope it can be the start of a more perfected implementation. I tested this out with a complex type of three levels. There is no intended logical meaning to the properties; they are simply for the purposes of example. You could call this a spiking session, as from here on in, now that I know what I am building, I would take a more TDD approach. (OK, purists, why did I not do this from the start? Well, I didn't; this was a brain dump, and now that I know what I am building, I can.)

    The console test, and how I used this with AutoMapper, is as follows:

        static void Main(string[] args)
        {
            var collection = new NameValueCollection();
            collection.Add("Name", "Andrew Rea");
            collection.Add("Number", "1");
            collection.Add("AddressLine1", "123 Street");
            collection.Add("AddressNumber", "2");
            collection.Add("AddressPostCodeCountry", "United Kingdom");
            collection.Add("AddressPostCodeNumber", "3");

            AutoMapper.Mapper.CreateMap<NameValueCollection, Person>()
                .ConvertUsing(x =>
                {
                    return (Person)RecurseType(x, typeof(Person), null);
                });

            var person = AutoMapper.Mapper.Map<NameValueCollection, Person>(collection);

            Console.WriteLine(person.Name);
            Console.WriteLine(person.Number);
            Console.WriteLine(person.Address.Line1);
            Console.WriteLine(person.Address.Number);
            Console.WriteLine(person.Address.PostCode.Country);
            Console.WriteLine(person.Address.PostCode.Number);
            Console.ReadLine();
        }

    Notice the convention that I am using, and that this method requires you to use: each property key is prefixed with the concatenated names of its parent types. This is the convention used by AutoMapper and it makes sense.
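    As a side note on feeding this from an actual GET request: the query-string portion of the example urls above parses straight into the NameValueCollection this expects. A small sketch (assumes a reference to System.Web; Person is the same type used in the console test):

        using System.Collections.Specialized;
        using System.Web;

        // Parse a raw query string into the NameValueCollection the mapper consumes
        NameValueCollection collection =
            HttpUtility.ParseQueryString("Name=Andrew%20Rea&Number=1&AddressLine1=123%20Street");

        var person = AutoMapper.Mapper.Map<NameValueCollection, Person>(collection);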
    I can also think of other uses for this, including using it with ASP.NET MVC ModelBinders for creating a complex type from the QueryString, which is itself a NameValueCollection. Hope this is of some help to people, and I would welcome any code reviews you could give me.

    References:
    Agatha: http://code.google.com/p/agatha-rrsl/
    AutoMapper: http://automapper.codeplex.com/

    Cheers for now, Andrew

    P.S. I will have the proposed solution for a more complete REST implementation for Agatha very soon.

    Read the article

  • ASP.NET MVC 3: Implicit and Explicit code nuggets with Razor

    - by ScottGu
    This is another in a series of posts I'm doing that cover some of the new ASP.NET MVC 3 features: New @model keyword in Razor (Oct 19th), Layouts with Razor (Oct 22nd), Server-Side Comments with Razor (Nov 12th), Razor's @: and <text> syntax (Dec 15th), and Implicit and Explicit code nuggets with Razor (today). In today's post I'm going to discuss how Razor enables you to both implicitly and explicitly define code nuggets within your view templates, and walk through some code examples of each of them.

    Fluid Coding with Razor. ASP.NET MVC 3 ships with a new view-engine option called "Razor" (in addition to the existing .aspx view engine). You can learn more about Razor, why we are introducing it, and the syntax it supports from my Introducing Razor blog post. Razor minimizes the number of characters and keystrokes required when writing a view template, and enables a fast, fluid coding workflow. Unlike most template syntaxes, you do not need to interrupt your coding to explicitly denote the start and end of server blocks within your HTML. The Razor parser is smart enough to infer this from your code. This enables a compact and expressive syntax which is clean, fast and fun to type. For example, a Razor snippet like the one sketched below can be used to iterate a collection of products and output a <ul> list of product names that link to their corresponding product pages. Notice how two code nuggets can be embedded within the content of the foreach loop: one outputs the name of the Product, and the other embeds the ProductID within a hyperlink. We don't have to explicitly wrap these code nuggets; Razor is instead smart enough to implicitly identify where the code begins and ends in both situations.

    How Razor Enables Implicit Code Nuggets. Razor does not define its own language. Instead, the code you write within Razor code nuggets is standard C# or VB. This allows you to re-use your existing language skills, and avoid having to learn a customized language grammar. The Razor parser has smarts built into it so that, whenever possible, you do not need to explicitly mark the end of the C#/VB code nuggets you write. This makes coding more fluid and productive, and enables a nice, clean, concise template syntax. There are several scenarios where you can avoid having to explicitly mark the beginning/end of a code nugget and instead have Razor implicitly identify the code nugget scope for you: property access (outputting a variable, or a sub-property referenced via "dot" notation, multiple levels deep), array/collection indexing, and calling methods. In all of these scenarios we do not have to explicitly end the code nugget; Razor is able to implicitly identify the end of the code block for us.
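    The snippets in the original post were images and are missing from this copy; the following is a sketch of the kinds of implicit code nuggets being described (the Model, Product, Name, and ProductID identifiers are assumed for illustration):

        <ul>
            @foreach (var product in Model) {
                <li><a href="/Products/@product.ProductID">@product.Name</a></li>
            }
        </ul>

        @* Fragments showing property access, collection indexing, and method calls, all implicitly scoped: *@
        <p>@product.Supplier.Address.City</p>
        <p>@products[0].Name</p>
        <p>@DateTime.Now.ToString("d")</p>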
    Razor's Parsing Algorithm for Code Nuggets. The below algorithm captures the core parsing logic we use to support "@" expressions within Razor, and to enable the implicit code nugget scenarios above:

        1. Parse an identifier - as soon as we see a character that isn't valid in a C# or VB identifier, we stop and move to step 2.
        2. Check for brackets - if we see "(" or "[", go to step 2.1; otherwise, go to step 3.
           2.1. Parse until the matching ")" or "]" (we track nested "()" and "[]" pairs and ignore "()[]" we see in strings or comments).
           2.2. Go back to step 2.
        3. Check for a "." - if we see one, go to step 3.1; otherwise, DO NOT ACCEPT THE "." as code, and go to step 4.
           3.1. If the character AFTER the "." is a valid identifier, accept the "." and go back to step 1; otherwise, go to step 4.
        4. Done!

    Differentiating between code and content. Step 3.1 is a particularly interesting part of the above algorithm, and enables Razor to differentiate between scenarios where an identifier is being used as part of the code statement and when it should instead be treated as static content. For example, a snippet can have ? and ! characters at the end of its code nuggets. These are both legal C# identifiers, but Razor is able to implicitly identify that they should be treated as static string content as opposed to being part of the code expression, because there is whitespace after them. This is pretty cool and saves us keystrokes.

    Explicit Code Nuggets in Razor. Razor is smart enough to implicitly identify a lot of code nugget scenarios. But there are still times when you want or need to be more explicit in how you scope the code nugget expression. The @(expression) syntax allows you to do this. You can write any C#/VB code statement you want within the @() syntax, and Razor will treat the wrapping () characters as the explicit scope of the code nugget statement. Two scenarios where we could use the explicit code nugget feature are sketched below: performing an arithmetic calculation/modification, and appending static text to the result of a code expression. As an example of the latter, we can embed a code nugget within an <img> element's src attribute to link to images with URLs like "/Images/Beverages.jpg". Without the explicit parentheses, Razor would have looked for a ".jpg" property on the CategoryName (and raised an error). By being explicit we can clearly denote where the code ends and the text begins.

    Using Generics and Lambdas. Explicit expressions also allow us to use generic types and generic methods within code expressions, and enable us to avoid the <> characters in generics being ambiguous with tag elements.
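    The explicit-expression screenshots are likewise missing from this copy; sketches of the two scenarios just described (the product and category identifiers are assumed for illustration):

        @* Arithmetic inside an explicit nugget *@
        <span>Total with tax: @(product.UnitPrice * 1.05)</span>

        @* Appending static text to a code expression result *@
        <img src="/Images/@(category.CategoryName).jpg" alt="@category.CategoryName" />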
    One More Thing... Intellisense within Attributes. We have used code nuggets within HTML attributes in several of the examples above. One nice feature supported by the Razor code editor within Visual Studio is the ability to still get VB/C# intellisense when doing this: for example, C# code intellisense when using an implicit code nugget within an <a> href="" attribute, or when using an explicit code nugget embedded in the middle of an <img> src="" attribute. We get full code intellisense for both scenarios, despite the fact that the code expression is embedded within an HTML attribute (something the existing .aspx code editor doesn't support). This makes writing code even easier, and ensures that you can take advantage of intellisense everywhere.

    Summary. Razor enables a clean and concise templating syntax that supports a very fluid coding workflow. Razor's ability to implicitly scope code nuggets reduces the amount of typing you need to perform, and leaves you with really clean code. When necessary, you can also explicitly scope code expressions using the @(expression) syntax to provide greater clarity around your intent, as well as to disambiguate code statements from static markup. Hope this helps, Scott. P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Remote Desktop to Your Azure Virtual Machine

    - by Shaun
    The Windows Azure team published their new development portal and SDK 1.3 this week. Within this new release there are a lot of cool features available. The one I had been looking forward to is Remote Desktop access to your running Windows Azure virtual machine.

    Configuring Remote Desktop Access. It is very simple to enable remote desktop access for an Azure service. First of all, let's create a new Windows Azure project in Visual Studio. In this example I just created a normal MVC 2 web role without any modifications. Then we right-click the Azure project node in the Solution Explorer window, select "Publish", choose "Deploy your Windows Azure project to Windows Azure" via the top radio button, and then select the credential, deployment service/slot, storage and label as usual. (You must have the Management API certificate uploaded to your Windows Azure account, and the certificate installed on your machine, in order to use this one-click deployment feature.) If you are familiar with this dialog you will notice a link named "Configure Remote Desktop connections". This is where we enable the remote desktop feature for the service. After clicking this link we set the remote desktop access authorization, which has 4 items to configure:

        Certificate: we need to either create or select a certificate file in order to encrypt the access credentials. In this example I will use the certificate file for my Management API.
        Username: the remote desktop user name to access the virtual machine.
        Password: the password for the access.
        Expiration: the access credentials expire after 1 month by default, but we can amend that here.

    After that, we click the OK button to get back to the publish dialog.

    The next step is to go back to the new Windows Azure portal and navigate to the hosted services list. I created a new hosted service and uploaded the certificate file to it. The user name and password for access to the Azure machine must be encrypted on the local machine, sent to the Windows Azure platform, and then decrypted on the Azure side with the same certificate; this is why we need to upload the certificate file to Azure. We navigate to "Hosted Services, Storage Accounts & CDN" in the left panel, create a new hosted service named "SDK13", and select the "Certificates" node. Then we click the "Add Certificates" button, select the local certificate file, and provide the password to install it into this Azure service.

    The final step is back in Visual Studio: in the publish dialog, just click the OK button. Visual Studio will upload our package and configuration, including the remote desktop settings, to our service.

    Remote Desktop Access to the Azure Virtual Machine. With all of that done, let's look back at the Windows Azure development portal. If I select the web role that I just published, we can see a section named "Remote Access" on the toolbar. In this section the Enable checkbox is checked, which means this role has the Remote Desktop Access feature enabled. If we want to modify the access credentials we can simply click the Configure button and update the user name, password, certificate and expiration date.

    Let's select the instance node under the web role. In this case I just created one instance for the demo.
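    As a point of reference for what the tooling does under the covers: enabling this feature amounts to importing two plugin modules into the service definition, with the encrypted credential settings landing in ServiceConfiguration.cscfg. A sketch of the relevant ServiceDefinition.csdef fragment (the role name is assumed; only one role in a deployment should import RemoteForwarder):

        <ServiceDefinition name="SDK13" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
          <WebRole name="MvcWebRole">
            <Imports>
              <!-- Enables remote desktop on the role's instances -->
              <Import moduleName="RemoteAccess" />
              <!-- Forwards RDP traffic into the deployment; one role only -->
              <Import moduleName="RemoteForwarder" />
            </Imports>
          </WebRole>
        </ServiceDefinition>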
    When we select the instance node, the Connect button becomes enabled. Clicking this button downloads an RDP file: a Remote Desktop configuration file that we can use to access our Azure virtual machine. Let's download it to our local machine and execute it. We input the user name and password we specified when we published our application to Azure and then click OK. A certificate warning dialog may appear; this is because the certificate we used for encryption is not signed by a trusted provider. Just select OK in these cases, as we know the certificate is safe. Finally, the Windows Azure virtual machine appears.

    A Quick Look into the Azure Virtual Machine. Let's have a very quick look at our virtual machine. There are 3 disks available to us: C, D and E. Disk C stores the local resources, diagnostics information, etc. Disk D is the system disk, which contains the OS, IIS, the .NET Framework, etc. Disk E stores our application code. We can also see the IIS instance hosting our website on Azure, and the IP configuration of the Azure virtual machine.

    Summary. In this post I covered one of the new features of the Azure SDK 1.3: Remote Desktop Access. We can enable this access per service, and all of the instances of the service can then be reached through the remote desktop tool. With this feature we can dig deep into the virtual machines behind our instances to see internal information such as the system event log, IIS logs, system information, etc. But we should be careful about modifying system settings, for 2 reasons as far as I know: 1. If we have more than one instance in our service, we should ensure that any system settings we modify are applied to all instances/virtual machines; otherwise, since the machines sit behind the Azure load balancer, our application may misbehave due to the differing settings between the instances. 2. When a virtual machine encounters a problem and needs to be moved to another physical machine, all settings we made by hand will disappear.

    Hope this helps, Shaun. All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

< Previous Page | 54 55 56 57 58 59 60 61 62 63 64  | Next Page >