Search Results

Search found 9124 results on 365 pages for 'general dba'.


  • Conditional macro expansion

    - by Dave DeLong
    Heads up: This is a weird question. I've got some really useful macros that I like to use to simplify some logging. For example I can do Log(@"My message with arguments: %@, %@, %@", @"arg1", @"arg2", @"arg3"), and that will get expanded into a more complex method invocation that includes things like self, _cmd, __FILE__, __LINE__, etc, so that I can easily track where things are getting logged. This works great. Now I'd like to expand my macros to not only work with Objective-C methods, but general C functions. The problem is the self and _cmd portions that are in the macro expansion. These two parameters don't exist in C functions. Ideally, I'd like to be able to use this same set of macros within C functions, but I'm running into problems. When I use (for example) my Log() macro, I get compiler warnings about self and _cmd being undeclared (which makes total sense). My first thought was to do something like the following (in my macro): if (thisFunctionIsACFunction) { DoLogging(nil, nil, format, ##__VA_ARGS__); } else { DoLogging(self, _cmd, format, ##__VA_ARGS__); } This still produces compiler warnings, since the entire if() statement is substituted in place of the macro, resulting in errors with the self and _cmd keywords (even though they will never be executed during function execution). My next thought was to do something like this (in my macro): if (thisFunctionIsACFunction) { #define SELF nil #define CMD nil } else { #define SELF self #define CMD _cmd } DoLogging(SELF, CMD, format, ##__VA_ARGS__); That doesn't work, unfortunately. I get "error: '#' is not followed by a macro parameter" on my first #define. My other thought was to create a second set of macros, specifically for use in C functions. This reeks of a bad code smell, and I really don't want to do this. Is there some way I can use the same set of macros from within both Objective-C methods and C functions, and only reference self and _cmd if the macro is in an Objective-C method?
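    One hedged sketch (it distinguishes translation units, not individual functions, so it only helps when the plain C functions live in .c files): let the preprocessor pick a definition based on whether the file is being compiled as Objective-C:

        /* in the shared logging header -- a sketch, not the actual macros
           from the question */
        #ifdef __OBJC__
        #define Log(fmt, ...) DoLogging(self, _cmd, (fmt), ##__VA_ARGS__)
        #else
        #define Log(fmt, ...) DoLogging(NULL, NULL, (fmt), ##__VA_ARGS__)
        #endif

    For C functions inside .m files this doesn't apply, which is essentially why the question is hard: the expansion would have to change per enclosing function, and the preprocessor cannot see what kind of function it is expanding into.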


  • Visual Studio opening .xml files in Notepad

    - by Portman
    So I'm happily working on a project making heavy use of custom .xml configuration files this morning. All of a sudden, whenever I double-click an .xml file in Solution Explorer, it opens in Notepad instead of within Visual Studio. Thinking that it was the Windows file associations, I right-clicked on a file in Explorer, selected Open With > Choose Defaults, and selected Visual Studio 2008. But the problem remains -- now when I open a file from Explorer, Visual Studio opens, then it opens Notepad. Needless to say, this is very frustrating, and Google is not much help. Has anyone else ever had this problem, and what did you do about it? Notes: This only happens for .xml files. Other text files (.config, .txt) open within Visual Studio just fine. This has nothing to do with Windows file associations, as Windows opens VS2008 just as it should. This is some crazy problem internal to Visual Studio. I've also tried Tools > Options > General > Restore File Associations. No luck. Nothing is present in Tools > Options > Text Editor > File Extension. This is what my "Open With" menu looks like for .xml files. As you can see, "XML Editor" is set as the default.


  • Discriminator based on joined property

    - by Andrew
    Suppose I have this relationship:

        abstract class Base { int Id; int JoinedId; ... }
        class Joined { int Id; int Discriminator; ... }
        class Sub1 : Base { ... }
        class Sub2 : Base { ... }

    for the following tables:

        table Base ( Id int, JoinedId int, ... )
        table Joined ( Id int, Discriminator int, ... )

    I would like to set up a table-per-hierarchy inheritance mapping for the Base, Sub1, Sub2 relationships, but using the Discriminator property from the Joined class as the discriminator. Here's the general idea for the mapping file:

        <class name="Base" table="Base">
          <id name="Id"><generator class="identity"/></id>
          <discriminator /> <!-- ??? or <join> or <many-to-one>? -->
          <subclass name="Sub1" discriminator-value="1">...</subclass>
          <subclass name="Sub2" discriminator-value="2">...</subclass>
        </class>

    Is there any way of accomplishing something like this with the <discriminator>, <join>, or <many-to-one>? NHibernate seems to assume the discriminator is a column on the given table (which makes sense to me.. I know this is unorthodox). Thanks.
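    One hedged possibility worth trying (NHibernate's <discriminator> accepts a formula attribute, so a correlated subquery can pull the value from the joined table; the performance of that subquery is a separate question):

        <class name="Base" table="Base">
          <id name="Id"><generator class="identity"/></id>
          <discriminator type="Int32"
              formula="(select j.Discriminator from Joined j where j.Id = JoinedId)" />
          <subclass name="Sub1" discriminator-value="1">...</subclass>
          <subclass name="Sub2" discriminator-value="2">...</subclass>
        </class>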


  • Who architected / designed C++'s IOStreams, and would it still be considered well-designed by today's standards?

    - by stakx
    First off, it may seem that I'm asking for subjective opinions, but that's not what I'm after. I'd love to hear some well-grounded arguments on this topic. In the hope of getting some insight into how a modern streams / serialization framework ought to be designed, I recently got myself a copy of the book Standard C++ IOStreams and Locales by Angelika Langer and Klaus Kreft. I figured that if IOStreams wasn't well-designed, it wouldn't have made it into the C++ standard library in the first place. After having read various parts of this book, I am starting to have doubts if IOStreams can compare to e.g. the STL from an overall architectural point-of-view. Read e.g. this interview with Alexander Stepanov (the STL's "inventor") to learn about some design decisions that went into the STL. What surprises me in particular: It seems to be unknown who was responsible for IOStreams' overall design (I'd love to read some background information about this — does anyone know good resources?); Once you delve beneath the immediate surface of IOStreams, e.g. if you want to extend IOStreams with your own classes, you get to an interface with fairly cryptic and confusing member function names, e.g. getloc/imbue, uflow/underflow, snextc/sbumpc/sgetc/sgetn, pbase/pptr/epptr (and there's probably even worse examples). This makes it so much harder to understand the overall design and how the single parts co-operate. Even the book I mentioned above doesn't help that much (IMHO). Thus my question: If you had to judge by today's software engineering standards (if there actually is any general agreement on these), would C++'s IOStreams still be considered well-designed? (I wouldn't want to improve my software design skills from something that's generally considered outdated.)


  • Threading Practice with Polling.

    - by Stacey
    I have a C# application that has to constantly read from a program; sometimes there is a chance it will not find what it needs, which will throw an exception. This is a limitation of the program it has to read from. This frequently causes my application to lock up as it tries to poll. So I solved it by spawning the polling off into a separate thread. However, watching the debugger, the thread is created and destroyed each time. I am uncertain if this is typical or not; but my question is: is this good practice, or am I using threading for the wrong purpose?

        class ProgramReader
        {
            static Thread oThread;

            public static void Read(Program program)
            {
                // check to see if the program exists
                if (false)
                    oThread = new Thread(new ThreadStart(program.Poll));
                if (oThread != null && !oThread.IsAlive)
                    oThread.Start();
            }
        }

    This is my general pseudocode. It runs every 10 seconds or so. Is this a huge hit to performance? The operation it performs is relatively small and lightweight; just repetitive.
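    As a point of comparison, a minimal long-lived poller (a sketch with hypothetical names, not code from the question) avoids constructing and discarding a Thread object on every read:

        using System;
        using System.Threading;

        class ProgramPoller
        {
            private readonly Thread worker;
            private volatile bool running = true;

            public ProgramPoller(Program program)
            {
                worker = new Thread(delegate()
                {
                    while (running)
                    {
                        try { program.Poll(); }
                        catch (Exception) { /* nothing found this time; try again later */ }
                        Thread.Sleep(10000); // poll every 10 seconds
                    }
                });
                worker.IsBackground = true; // don't keep the process alive on exit
                worker.Start();
            }

            public void Stop()
            {
                running = false;
            }
        }

    Spawning a fresh thread every 10 seconds is not a huge performance hit in absolute terms, but a single background thread (or a System.Threading.Timer) expresses the intent more directly.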


  • error detection/correction/recovery in serial protocols

    - by Jason S
    I have some designing to do for a serial protocol and am running into some questions that I figure must have been considered elsewhere. So I'm wondering if there are some recommendations for best practices in designing serial protocols. (Please either state a fact that is easily verifiable, or cite a reputable source if you make a claim.) General recommendations for websites/books are also welcome. In particular I have to deal with issues like:

        - parsing a stream of bytes into packets
        - verifying a packet is correct (easy with a CRC, for instance)
        - identifying reasonable types of errors that can occur (e.g. in a point-to-point serial stream, sporadic single-bit errors and dropped series of bytes are both likely, but extra phantom bytes are unlikely; whereas with a record stored in flash memory or on a disk drive the types of errors that predominate are different)
        - error correction or recovery (if I detect an error in a packet, can I correct it? If not, can I resync to the boundary of the next packet?)
        - how to make variable-length packets robust to error correction / recovery

    Any suggestions?
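    One common framing approach, sketched here in C to make the pieces concrete (an illustration added for context, not a recommendation from the question): reserve a sync byte to mark packet boundaries, byte-stuff it out of the payload so the receiver can always resynchronize on the next boundary after an error, and append a CRC so corrupted packets are detected and dropped rather than corrected:

        #include <stdint.h>
        #include <stddef.h>

        #define SYNC 0x7E  /* marks start of packet; never appears in payload */
        #define ESC  0x7D  /* SYNC/ESC inside payload become ESC, byte ^ 0x20 */

        /* CRC-16-CCITT over the payload; appended to the end of each packet */
        static uint16_t crc16(const uint8_t *p, size_t n)
        {
            uint16_t crc = 0xFFFF;
            while (n--) {
                crc ^= (uint16_t)*p++ << 8;
                for (int i = 0; i < 8; i++)
                    crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                         : (uint16_t)(crc << 1);
            }
            return crc;
        }

    With this scheme a dropped or flipped byte at worst kills the current packet: the parser discards input until the next unescaped SYNC, so variable-length packets stay recoverable without forward error correction.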


  • How do I make a third party .jar available to my .jsp page?

    - by Matthew
    I'm just starting to learn JSP (and I'm pretty new to Java in general), and I'd like to use JSON-lib with it. I want to make a page something like this:

        <%@ page import="net.sf.json.JSONObject"%>
        <%
        String json = new JSONObject().put("hello", "world").toString();
        out.println(json);
        %>

    I downloaded json-lib-2.3-jdk15.jar and put it in the same directory as the .jsp page. But I get this error:

        org.apache.jasper.JasperException: Unable to compile class for JSP:

        An error occurred at line: 6 in the generated java file
        Only a type can be imported. net.sf.json.JSONObject resolves to a package

        An error occurred at line: 3 in the jsp file: /getCard.jsp
        JSONObject cannot be resolved to a type
        1: <%@ page import="net.sf.json.JSONObject" %>
        2: <%
        3: String json = new JSONObject().put("hello", "world").toString();
        4: out.println(json);
        5: %>
        6:

    How do I make the JSONObject class available to my .jsp page?
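    The usual fix (a standard servlet-container fact, though the layout below assumes a default webapp structure): the JSP compiler only sees classes on the web application's classpath, which means jars go in WEB-INF/lib, not next to the page:

        webapps/
          myapp/
            getCard.jsp
            WEB-INF/
              lib/
                json-lib-2.3-jdk15.jar   <-- place the jar here, then redeploy/restart

    Note that json-lib also declares its own dependencies (commons-lang, commons-collections, commons-beanutils, commons-logging, ezmorph), which would need to sit in WEB-INF/lib as well.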


  • Sphinx - Python modules, classes and functions documentation

    - by user343934
    Hi everyone, I am trying to document my small project with Sphinx, which I'm only recently getting familiar with. I read some tutorials and the Sphinx documentation but couldn't make it work. Setup and configuration are OK; my problems are with using Sphinx in a technical way. My table of contents should look like this:

        Overview
            Contents
        Configuration
            Contents
        System Requirements
            Contents
        How to use
            Contents
        Modules
            Index
            Display
        Help
            Contents

    Moreover, my focus is on the Modules section, built from docstrings. The modules live in the directory c:/wamp/www/project/:

        Index.py   -- class HtmlTemplate: header(), body(), form(), plus an inline __init_main block
        display.py -- class MainDisplay: execute(), display(), tree(), plus an inline __init_main block

    My documentation directory is c:/users/abc/Desktop/Documentation/doc/, containing _build, _static, _templates, conf.py and index.rst. I have added the Modules directory to the system environment and edited index.rst with the following, just to test the table of contents, but I couldn't extract the docstrings directly:

        T-Alignment Documentation

        The documentation covers a general overview of the application, covering functionality and requirements in detail. To know how to use the application it is better to go through the documentation.

        .. _overview:

        Overview

        .. _System Requirement:

        System Requirement

        Seq-alignment tools can be used on varied systems based on whether all intermediary applications are available or not, as in Windows, Mac, Linux and UNIX. But it has been tested on Windows, working under a beta version. System Applications Server

        .. _Configuration::

        Configuration

        Basic steps in configuration involve the following categories: environment variables, Apache settings

        .. _Modules::

        Modules

    How can I continue from here? I am just a beginner with the Sphinx documentation tool; I need your suggestions to bring my modules' docstrings into my documentation page. Thanks
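    A hedged sketch of the usual route for pulling docstrings in (the sys.path line assumes the sources live where the question says; module names follow the question's directory listing):

        # conf.py
        import os, sys
        sys.path.insert(0, os.path.abspath('c:/wamp/www/project'))
        extensions = ['sphinx.ext.autodoc']

    and then in index.rst, under the Modules heading:

        Modules
        =======

        .. automodule:: Index
           :members:

        .. automodule:: display
           :members:

    With sphinx.ext.autodoc enabled, each automodule directive imports the module and renders the class and method docstrings in place.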


  • Single developer, project organization

    - by poke
    I am looking for a good (and free) way to organize some of my personal projects. I say "organize" because I'm not sure the standard project management software solutions are exactly what I'm looking for, and especially not what I, as a single developer, need. In general, I just want to keep the progress of my projects organized in some way. I would like to be able to keep track of milestones and split them into multiple smaller tasks, so I can keep track of my progress. So some task/issue-based system would probably be good, especially as I also want to keep track of issues/bugs in specific versions (although I alone will create those issues). I am and will be the only developer on these projects, so it doesn't matter whether the software is offline or online, and I don't need any collaboration features (like commenting on things, or assigning tasks to other developers etc). But if there is a good software fit for my needs that also has those things, I don't really care. After all, it's easy enough not to use available features. Many online solutions also offer integrated code hosting. I am using git internally, but I don't plan to push any of the code, so such a feature is not needed either. In the case of online solutions, however, I would like the projects to be closed to the public (some of the online utilities only offer open-source projects for free and require payment for private projects). I have looked at some project management solutions already, and I also read some similar questions here on SO. But given that I'm a single developer, my focus is probably a bit different from when others ask for a huge distributed solution that supports many developers and various collaboration features. Some standard answers such as Trac (which also only supports one project), Redmine and FogBUGZ look interesting, but are a bit off my interest (although you may change my mind on that :P). Currently, I'm looking at Indefero, which doesn't look too bad. But what do you think?


  • Side by side madness - running binaries on different computer (with a twist)

    - by sbk
    Here's my configuration:

        Computer A - Windows 7, MS Visual Studio 2005 patched for Win7 compatibility (8.0.50727.867)
        Computer B - Windows XP SP2, MS Visual Studio 2005 installed (8.0.50727.42)

    My project has some external dependencies (prebuilt DLLs - either built on A or downloaded from the Internet), a couple of DLLs built from sources, and one executable. I am mostly developing on A and all is fine there. At some point I try to build my project on computer B, copying the prebuilt DLLs to the output folder. Everything builds fine, but trying to start my application I get "The application failed to initialize properly (0xc0150002)...". The event log contains two of these: "Dependent Assembly Microsoft.VC80.CRT could not be found" and "Last Error was The referenced assembly is not installed on your system.", plus the slightly more amusing "Generate Activation Context failed for some.dll. Reference error message: The operation completed successfully." At this point I try my Google-Fu, but in vain - virtually all hits are about running binaries on machines without Visual Studio installed. In my case, however, the executables fail to run on the very computer they were built on. The next step was to try Dependency Walker, and it baffled me even more - my DLLs built from sources on the same box cannot find MSVCR80.DLL and MSVCP80.DLL; however, the executable seems to be alright in respect to those two DLLs, i.e. when I open the executable with Dependency Walker it shows that the MSVC?80.DLLs can be found, but when I open one of my DLLs it says they cannot. That's where I run completely out of ideas, so I'm asking you, dear stackoverflow :) I admit I'm a bit blurry on the whole side-by-side thing, so general reading on the topic will also be appreciated.
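    One thing that can be checked directly (a general side-by-side debugging step, not a confirmed diagnosis for this case): each binary carries an embedded manifest naming the exact CRT assembly version it wants, and that version must exist under C:\Windows\WinSxS on the machine. The dependency section looks roughly like this (the version shown is only an example; the one in your DLLs will reflect the compiler that built them):

        <dependency>
          <dependentAssembly>
            <assemblyIdentity type="win32" name="Microsoft.VC80.CRT"
                              version="8.0.50727.762"
                              processorArchitecture="x86"
                              publicKeyToken="1fc8b3b9a1e18e3b" />
          </dependentAssembly>
        </dependency>

    If the DLLs built on computer B ask for a newer CRT than B has installed (possible if headers/libs were copied over from A), installing the matching Visual C++ 2005 redistributable or service pack would be the usual fix.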


  • Curve fitting: Find a CDF (or any function) that satisfies a list of constraints.

    - by dreeves
    I have some constraints on a CDF in the form of a list of x-values and, for each x-value, a pair of y-values that the CDF must lie between. We can represent that as a list of {x, y1, y2} triples such as:

        constraints = {{0, 0, 0}, {1, 0.00311936, 0.00416369}, {2, 0.0847077, 0.109064},
                       {3, 0.272142, 0.354692}, {4, 0.53198, 0.646113},
                       {5, 0.623413, 0.743102}, {6, 0.744714, 0.905966}}

    Graphically that looks like this: And since this is a CDF there's an additional implicit constraint of {Infinity, 1, 1}, i.e. the function must never exceed 1. Also, it must be monotone. Now, without making any assumptions about its functional form, we want to find a curve that respects those constraints. For example: (I cheated to get that one: I actually started with a nice log-normal distribution and then generated fake constraints based on it.) One possibility is a straight interpolation through the midpoints of the constraints:

        mids = ({#1, Mean[{#2, #3}]} &) @@@ constraints
        f = Interpolation[mids, InterpolationOrder -> 0]

    Plotted, f looks like this: That sort of technically satisfies the constraints, but it needs smoothing. We can increase the interpolation order, but then it violates the implicit constraints (always less than one, and monotone): How can I get a curve that looks as much like the first one above as possible? Note that NonlinearModelFit with a LogNormalDistribution will do the trick in this example but is insufficiently general, as there will sometimes not exist a log-normal distribution satisfying the constraints.
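    A hedged sketch of one assumption-free approach (an optimization formulation added for illustration, not something from the question): treat the CDF values at the constraint x-values as unknowns, minimize a roughness penalty, and impose the band, monotonicity and ceiling constraints directly:

        xs = constraints[[All, 1]];
        lo = constraints[[All, 2]];
        hi = constraints[[All, 3]];
        vars = Array[y, Length[xs]];

        (* second differences as a smoothness penalty *)
        obj = Total[Differences[vars, 2]^2];

        cons = Join[
           Thread[lo <= vars], Thread[vars <= hi], (* stay inside each band *)
           Thread[Differences[vars] >= 0],         (* monotone *)
           {Last[vars] <= 1}];                     (* CDF never exceeds 1 *)

        sol = NMinimize[{obj, cons}, vars];
        f = Interpolation[Transpose[{xs, vars /. Last[sol]}]]

    The final interpolation can reintroduce small monotonicity violations between knots, so a monotone interpolation (or InterpolationOrder -> 1) may be needed for a strict CDF.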


  • Can't get syntax on to work in my gvim.

    - by user198553
    (I'm new to Linux and Vim, and I'm trying to learn Vim, but I'm having some issues with it that I can't seem to fix.) I'm on a Linux installation (Ubuntu 8.04) that I can't update, using Vim 7.1.138. My Vim installation is in /usr/share/vim/vim71/. My .vimrc file is in /home/user/.vimrc, as follows:

        fun! MySys()
            return "linux"
        endfun

        set runtimepath=~/.vim,$VIMRUTNTIME
        source ~/.vim/.vimrc

    And then, in my /home/user/.vim/.vimrc:

        " =============== GENERAL CONFIG ==============
        set nocompatible
        syntax on

        " =============== ENCODING AND FILE TYPES =====
        set encoding=utf8
        set ffs=unix,dos,mac

        " =============== INDENTING ===================
        set ai " Automatically set the indent of a new line (local to buffer)
        set si " smartindent (local to buffer)

        " =============== FONT ========================
        " Set font according to system
        if MySys() == "mac"
            set gfn=Bitstream\ Vera\ Sans\ Mono:h13
            set shell=/bin/bash
        elseif MySys() == "windows"
            set gfn=Bitstream\ Vera\ Sans\ Mono:h10
        elseif MySys() == "linux"
            set gfn=Inconsolata\ 14
            set shell=/bin/bash
        endif

        " =============== COLORS ======================
        colorscheme molokai

        " ============== PLUGINS ======================
        " -------------- NERDTree ---------------------
        :noremap ,n :NERDTreeToggle<CR>

        " =============== DIRECTORIES =================
        set backupdir=~/.backup/vim
        set directory=~/.swap/vim

    ...fact is, the command syntax on is not working, in either vim or gvim. And the strange thing is: if I try to set the syntax using the gvim toolbar, it works. Then, in normal mode in gvim, after activating it via the toolbar, :syntax off works, and right after that :syntax on doesn't work!! What's going on? Am I missing something?
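    One thing worth checking (an observation about the config quoted above, not a confirmed fix): the runtimepath line misspells $VIMRUNTIME as $VIMRUTNTIME, which would drop the stock runtime files, including the syntax scripts, from Vim's search path and could make syntax on fail silently:

        " corrected line for ~/.vimrc
        set runtimepath=~/.vim,$VIMRUNTIME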


  • Is this a good job description? What title would you give this position?

    - by Zack Peterson
    Department: Information Technology Reports To: Chief Information Officer Purpose: Company's ________________ is specifically engaged in the development of World Wide Web applications and distributed network applications. This person is concerned with all facets of the software development process and specializes in software product management. He or she contributes to projects in an application architect role and also performs individual programming tasks. Essential Duties & Responsibilities: This person is involved in all aspects of the software development process such as: Participation in software product definitions, including requirements analysis and specification Development and refinement of simulations or prototypes to confirm requirements Feasibility and cost-benefit analysis, including the choice of architecture and framework Application and database design Implementation (e.g. installation, configuration, customization, integration, data migration) Authoring of documentation needed by users and partners Testing, including defining/supporting acceptance testing and gathering feedback from pre-release testers Participation in software release and post-release activities, including support for product launch evangelism (e.g. developing demonstrations and/or samples) and subsequent product build/release cycles Maintenance Qualifications: Bachelor's degree in computer science or software engineering Several years of professional programming experience Proficiency in the general technology of the World Wide Web: Hypertext Transfer Protocol (HTTP) Hypertext Markup Language (HTML) JavaScript Cascading Style Sheets (CSS) Proficiency in the following principles, practices, and techniques: Accessibility Interoperability Usability Security (especially prevention of SQL injection and cross-site scripting (XSS) attacks) Object-oriented programming (e.g. encapsulation, inheritance, modularity, polymorphism, etc.) Relational database design (e.g. normalization, orthogonality) Search engine optimization (SEO) Asynchronous JavaScript and XML (AJAX) Proficiency in the following specific technologies utilized by Company: C# or Visual Basic .NET ADO.NET (including ADO.NET Entity Framework) ASP.NET (including ASP.NET MVC Framework) Windows Presentation Foundation (WPF) Language Integrated Query (LINQ) Extensible Application Markup Language (XAML) jQuery Transact-SQL (T-SQL) Microsoft Visual Studio Microsoft Internet Information Services (IIS) Microsoft SQL Server Adobe Photoshop


  • PHP templating challenge (optimizing front-end templates)

    - by Matt
    Hey all, I'm trying to do some templating optimizations and I'm wondering if it is possible to do something like this:

        function table_with_lowercase($data) {
            $out = '<table>';
            for ($i = 0; $i < 3; $i++) {
                $out .= '<tr><td>';
                $out .= strtolower($data);
                $out .= '</td></tr>';
            }
            $out .= "</table>";
            return $out;
        }

    NOTE: You do not know what $data is when you run this function. Results in:

        <table>
        <tr><td><?php echo strtolower($data) ?></td></tr>
        <tr><td><?php echo strtolower($data) ?></td></tr>
        <tr><td><?php echo strtolower($data) ?></td></tr>
        </table>

    General Case: Anything that can be evaluated (compiled) will be. Any time there is an unknown variable, the variable and the functions enclosing it will be output in string format. Here's one more example:

        function capitalize($str) {
            return ucwords(strtolower($str));
        }

    If $str is "HI ALL" then the output is:

        Hi All

    If $str is unknown then the output is:

        <?php echo ucwords(strtolower($str)); ?>

    In this case it would be easier to just call the function (i.e. <?php echo capitalize($str) ?>), but the example before would allow you to precompile your PHP to make it more efficient.
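    A rough sketch of the mechanism this implies (everything here, the UnknownVar class, the function names, the convention, is invented for illustration and not from the question): represent unknown template variables as marker objects, evaluate eagerly when a real value is available, and fold deferred calls into a PHP-code string otherwise:

        <?php
        // marker for a template variable whose value is unknown at compile time;
        // $expr holds the PHP expression that will produce it at runtime
        class UnknownVar {
            public $expr;
            function __construct($expr) { $this->expr = $expr; }
        }

        function tpl_strtolower($x) {
            if ($x instanceof UnknownVar) {
                // can't evaluate yet: wrap the pending expression
                return new UnknownVar('strtolower(' . $x->expr . ')');
            }
            return strtolower($x); // known value: evaluate at compile time
        }

        function tpl_emit($x) {
            // unknowns become inline PHP in the compiled template output
            if ($x instanceof UnknownVar) {
                return '<?php echo ' . $x->expr . ' ?>';
            }
            return $x;
        }

        // tpl_emit(tpl_strtolower(new UnknownVar('$data')))
        //   --> <?php echo strtolower($data) ?>
        // tpl_emit(tpl_strtolower("ABC"))
        //   --> abc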


  • Passing data between ViewControllers versus doing local Fetch in each VC

    - by Tofrizer
    Hi All, I'm developing an iPhone app using Core Data and I'm looking for some general advice and recommendations on whether its acceptable to pass data between ViewControllers versus doing a local fetch in each ViewController as you navigate to it. Ordinarily I would say it all depends on various factors (e.g. performance etc) but the passing data approach is so prevalent in my app and I'm spooked by all the stories about Apple rejecting apps because of not conforming to their standard guidelines. So let me put another way -- is it non-standard to pass data between VC's? The reason I pass data so much is because each ViewController is just another view on to data present in my object model / graph. Once I have a handle on my first object in the first view controller (which I of course do have to fetch), I can use the existing object composition / relationships to drill down into the next level of detail into data and so I just pass these objects to the next VC. Separately, one possible downside with this passing-data-to-each-VC approach is I don't benefit from (what I perceive to be) the optimisation/benefits that NSFetchedResultsController provides in terms of efficient memory usage and section handling. My app is read-only but I do have one table with 5000 rows and I'm curious if I am missing out on NSFetchedResultsController benefits. Any thoughts on this as well? Can I somehow still benefit from NSFetchedResultsController goodness without having to do a full fetch (as I would have already passed in the data from my previous VC)? Thanks a lot.
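    On the NSFetchedResultsController point, a hedged sketch (entity, relationship and key names here are placeholders, not from the question): passing only the parent object along and building a scoped fetch in the destination VC keeps the drill-down style while regaining batching and section support:

        NSFetchRequest *request = [[[NSFetchRequest alloc] init] autorelease];
        [request setEntity:[NSEntityDescription entityForName:@"Item"
                                       inManagedObjectContext:context]];
        // only the rows belonging to the object handed over by the previous VC
        [request setPredicate:[NSPredicate predicateWithFormat:@"parent == %@", parentObject]];
        [request setSortDescriptors:[NSArray arrayWithObject:
            [[[NSSortDescriptor alloc] initWithKey:@"name" ascending:YES] autorelease]]];
        [request setFetchBatchSize:20]; // rows are faulted in as the table scrolls

        NSFetchedResultsController *frc = [[NSFetchedResultsController alloc]
            initWithFetchRequest:request
            managedObjectContext:context
              sectionNameKeyPath:nil
                       cacheName:@"ItemList"];

    For a read-only 5000-row table this batching is exactly the benefit being asked about, and it still works when the predicate is driven by a passed-in object.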


  • Non-wiki CMS for an online user guide

    - by Russell Leggett
    For a large web application I'm building, I need to create an extensive user guide. The first thought was a wiki, but the wikis I've seen lack the ease of customization of CMSs and have a lot of extra features I don't need. The number of users editing the document is small and closed, but it needs to be editable by non-technical users. The number of pages will likely be between 50-100. It also needs to be searchable. It would also be a plus if it had nice readable URLs to link to from our web app. Right now, my best guess is WordPress, but that seems a lot more geared towards blogging with just a handful of pages than having several pages and possibly no blog. There isn't a language requirement, although we have the most experience with Java and PHP. We aren't looking to do any major coding other than customizing visuals, so hopefully the language will not be too important. Again, I'm not looking for the best general-purpose CMS, just something that would be easiest for a user guide.



  • How to restrict text search to a certain subset of the database?

    - by Nikhil Garg
    I have a large central database of around 1 million heavy records. In my app, every user would have a subset of rows from the central table, which would be very small (probably 100 records each). When a particular user has logged in, I want to search on this data set only. Example: Say I have a central database of all cars in the world. I have a user profile for General Motors (GM), Ferrari, etc. When GM is logged in, I just want to search (a full-text search, not firing a SQL query) for those cars which are manufactured by GM. Also, GM may launch/withdraw a model, in which case the central db would be updated and so would the row set associated with GM. In the case of acquisitions, the db of certain profiles may change without the launch/removal of a new car. So the central db won't change then, but row sets may. What's the best way to implement such a design? These smaller row sets need to be dynamic, depending on user activities. We are on Rails 2.3.5 and use thinking_sphinx as the connector and Sphinx/MySQL for search and relational associations.
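    A hedged sketch of the usual thinking_sphinx route (model and attribute names are placeholders for whatever the real schema uses): index the full central table once, declare the owning profile as a filterable attribute, and scope every search with it, so Sphinx does the restriction rather than a per-user index:

        class Car < ActiveRecord::Base
          belongs_to :manufacturer

          define_index do
            indexes name          # full-text fields
            indexes description
            has manufacturer_id   # numeric attribute usable as a filter
          end
        end

        # at search time, restrict the full-text query to the logged-in profile:
        Car.search "roadster",
                   :with => { :manufacturer_id => current_user.manufacturer_id }

    Since the row sets change with launches, withdrawals and acquisitions, the index would need periodic reindexing (or thinking_sphinx's delta indexing) to stay in sync with the central table.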


  • MS Query Analyzer/Management Studio replacement?

    - by kprobst
    I've been using SQL Server since version 6.5, and I've always been a bit amazed that the tools seem to be targeted at DBAs rather than developers. I liked the simplicity and speed of the Query Analyzer, for example, but hated the built-in editor, which was really no better than a syntax-coloring-capable Notepad. Now that we have Management Studio, the management part seems a bit better, but from a developer standpoint the tools are even worse. Visual Studio's excellent text editor... without a way to customize keyboard bindings!? Don't get me started on how unusable the tree-based management hierarchy is. Why can't I re-root the tree on a list of stored procs, for example, the way Enterprise Manager used to allow? Now I have a treeview that needs to be scrolled horizontally, which makes it eminently useless. The SQL Server support in Visual Studio is fantastic for working with stored procedures and functions, but it's terrible as a general ad hoc data query tool. I've tried various tools over the years, but invariably they seem to focus on the management side and shortchange the developer in me. I just want something with basic admin capabilities, good keyboard support and the requisite DDL functionality (ideally something like the Query Analyzer). At this point I'm seriously thinking of using vim+sqlcmd and a console... I'm that desperate :) Those of you who work day in and day out with SQL Server and Visual Studio... do you find the tools adequate? Have you ever wished they were better, and if you have found something better, could you share please? Thanks!


  • Symfony: How to hide form fields from display and then set values for them in the action class

    - by Tom
    I am fairly new to symfony and I have 2 fields relating to my table "Pages": created_by and updated_by. These are related to the users table (sfGuardUser) as foreign keys. I want these to be hidden from the edit/new forms, so I have set up the generator.yml file not to display these fields:

        form:
          display:
            General: [name, template_id]
            Meta: [meta_title, meta_description, meta_keywords]

    Now I need to set the fields on save. I have been searching for how to do this all day and have tried a hundred methods. The method I have got working is this, in the actions class:

        protected function processForm(sfWebRequest $request, sfForm $form)
        {
          $form_params = $request->getParameter($form->getName());
          $form_params['updated_by'] = $this->getUser()->getGuardUser()->getId();
          if ($form->getObject()->isNew())
            $form_params['created_by'] = $this->getUser()->getGuardUser()->getId();
          $form->bind($form_params, $request->getFiles($form->getName()));
        }

    So this works. But I get the feeling that ideally I shouldn't be modifying the web request, but instead modifying the form/object directly. However, I haven't had any success with things like:

        $form->getObject()->setUpdatedBy($this->getUser()->getGuardUser());

    If anyone could offer any advice on the best way to solve this type of problem I would be very grateful. Thanks, Tom
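    One hedged alternative (a common symfony 1.x pattern; class and accessor names assume the standard generated form and model classes): unset the widgets in the form so they are neither displayed nor validated, then set the values on the object after binding but before saving:

        // in lib/form/doctrine/PagesForm.class.php
        public function configure()
        {
          unset($this['created_by'], $this['updated_by']);
        }

        // in the actions class
        protected function processForm(sfWebRequest $request, sfForm $form)
        {
          $form->bind($request->getParameter($form->getName()),
                      $request->getFiles($form->getName()));
          if ($form->isValid())
          {
            $page = $form->getObject();
            $userId = $this->getUser()->getGuardUser()->getId();
            if ($page->isNew())
              $page->setCreatedBy($userId); // assumes generated accessors exist
            $page->setUpdatedBy($userId);
            $page = $form->save();
          }
        }

    Because the two fields are removed from the form, $form->save() should not overwrite the values set directly on the object.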


  • Targeted Simplify in Mathematica

    - by Timo
    I generate very long and complex analytic expressions of the general form:

        (...something not so complex...)(...ditto...)(...ditto...)...lots...

    When I try to use Simplify, Mathematica grinds to a halt, I am assuming because it tries to expand the brackets and/or simplify across different brackets. The brackets, while containing long expressions, are easily simplified by Mathematica on their own. Is there some way I can limit the scope of Simplify to a single bracket at a time? Edit: Some additional info and progress. Using the advice from you guys, I have now started using something in the vein of:

        In[1]:= trouble = Log[(x + I y) (x - I y) + Sqrt[(a + I b) (a - I b)]];

        In[2]:= Replace[trouble, form_ /; (Head[form] == Times) :> Simplify[form], {3}]
        Out[2]= Log[Sqrt[a^2 + b^2] + (x - I y) (x + I y)]

    Changing Times to an appropriate head like Plus or Power makes it possible to target the simplification quite accurately. The problem / question that remains, though, is the following: Simplify will still descend deeper than the level specified to Replace, e.g.

        In[3]:= Replace[trouble, form_ /; (Head[form] == Plus) :> Simplify[form], {1}]
        Out[3]= Log[Sqrt[a^2 + b^2] + x^2 + y^2]

    simplifies the square root as well. My plan was to iteratively use Replace from the bottom up, one level at a time, but this clearly would result in a vast amount of repeated work by Simplify and ultimately the exact same bogging down of Mathematica I experienced at the outset. Is there a way to restrict Simplify to a certain level (or levels)? I realize that this sort of restriction may not produce optimal results, but the idea here is getting something that is "good enough".
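    One documented handle that may help (hedged: whether the result is "good enough" depends on the expression): Simplify's ExcludedForms option tells it to leave subexpressions matching a pattern alone, so it can work across a level without descending into, say, the square root:

        (* leave anything matching Sqrt[_] untouched while the rest simplifies *)
        Simplify[trouble, ExcludedForms -> {Sqrt[_]}]

    Combined with the Replace-at-a-level approach above, this gives some control over both where Simplify is applied and what it is allowed to touch.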


  • Enforce strong type checking in C (type strictness for typedefs)

    - by quinmars
    Is there a way to enforce explicit casts for typedefs of the same type? I have to deal with UTF-8, and sometimes I get confused between the indices for the character count and the byte count. So it would be nice to have some typedefs:

        typedef unsigned int char_idx_t;
        typedef unsigned int byte_idx_t;

    with the addition that you need an explicit cast between them:

        char_idx_t a = 0;
        byte_idx_t b;

        b = a;              // compile warning
        b = (byte_idx_t) a; // ok

    I know that such a feature doesn't exist in C, but maybe you know a trick or a compiler extension (preferably gcc) that does that. EDIT: I still don't really like Hungarian notation in general. I couldn't use it for this problem because of project coding conventions, but I have used it now in another similar case, where the types are also the same and the meanings are very similar. And I have to admit: it helps. I never would go and declare every integer with a starting "i", but as in Joel's example of overlapping types, it can be life saving.
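    A sketch of the usual workaround (a well-known C idiom rather than anything specific to this question): wrap each index in a single-member struct, which makes the two types genuinely incompatible, so mixing them becomes a hard compile error and conversions have to be spelled out:

        typedef struct { unsigned int v; } char_idx_t;
        typedef struct { unsigned int v; } byte_idx_t;

        static inline byte_idx_t to_byte_idx(char_idx_t c)
        {
            /* the only sanctioned conversion path between the two types */
            byte_idx_t b = { c.v };
            return b;
        }

        void example(void)
        {
            char_idx_t a = { 0 };
            byte_idx_t b;

            /* b = a;           -- error: incompatible types, not just a warning */
            b = to_byte_idx(a); /* ok: conversion is explicit and visible */
            (void)b;
        }

    The cost is that arithmetic and comparisons also have to go through .v or helper functions, which may or may not be acceptable.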


  • Do I need Response.End() in ASP.Net 2.0

    - by Hamish Grubijan
    Hi, I am just starting with ASP.Net. I copied a ex-co-worker's code (from .Net 1.1 era) and it has a Response.End(); in case of an error. There is also a: catch (Exception ex) { Response.Write(ex.Message); Response.End(); } at the end of Page_Load(object sender, System.EventArgs e) which always appends "Thread was aborted." or something like that at the end. I suspect that this worked differently before, or the error conditions were not tested very well. Anyhow, I was able to stop using Response.End(); in case when I do not like the GET parameters, and use return; instead. It seemed to do the right think in a simple case. Is this Ok in general? There are some problems with the code I copied, but I do not want to do a rewrite; I just want to get it running first and find wrinkles later. The Response.End(); caused a mental block for me, however, so I want to figure it out. I want to keep the catch all clause just in case, at least for now. I could also end the method with: catch (System.Threading.ThreadAbortException) { Response.End(); } catch (Exception ex) { Response.Write(ex.Message); Response.End(); } but that just seems extremely stupid, once you think about all of the exceptions being generated. Please give me a few words of wisdom. Feel free to ask if something is not clear. Thanks! P.S. Ex-coworker was not fired and is a good coder - one more reason to reuse his example.
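    For the catch-all itself, a commonly cited alternative (a sketch based on the standard ASP.NET APIs; whether it fits depends on what the rest of the page does): rethrow the abort so ASP.NET can finish, and use CompleteRequest instead of Response.End() when you only want to stop producing output without aborting the thread:

        protected void Page_Load(object sender, EventArgs e)
        {
            try
            {
                // ... page logic ...
            }
            catch (System.Threading.ThreadAbortException)
            {
                throw; // raised by Response.End(); let ASP.NET finish ending the response
            }
            catch (Exception ex)
            {
                Response.Write(ex.Message);
                // skip the rest of the pipeline without a thread abort:
                Context.ApplicationInstance.CompleteRequest();
            }
        }

    That avoids the "Thread was aborted." text being appended, because no ThreadAbortException is thrown in the error path.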


  • libav/ffmpeg: avcodec_decode_video2() returns -1 when separating demultiplexing and decoding

    - by unbekannt
    I'm using libav (from a C++ program on Linux and Windows) to decode video streams from a file, which works fine (decoding various formats like H264 and MPEG2) using avformat_open_input(), av_read_frame() and avcodec_decode_video2(). Now I have to separate demultiplexing and decoding. One class will call avformat_open_input() and av_read_frame() and then pass the AVPackets into a queue that is read by another class. There I use avcodec_alloc_context3() to get the AVCodecContext needed for avcodec_decode_video2(). I've tested that with a MPEG2 video stream and it works. Problems arise if I try to decode a H264 stream: avcodec_decode_video2() always returns -1 and outputs "no frame". I understand that additional data (SPS/PPS) is needed to decode this stream, so I've tried to replicate the original AVCodecContext from the demultiplexer in the decoder, but it won't work: Copying the content of the extradata field and setting all other values that differ from the default ones in the decoder: -1 is returned Using the same context (i.e. passing along the pointer) results in a crash I also tried to set CODEC_FLAG2_CHUNKS. avcodec_decode_video2() then always returns packet.size - 3 (??) and frameFinished is never set to 1. In my opinion I have a general problem here that will arise whenever settings from the original CodecContext are needed to decode the AVPackets. I'd be grateful for any hints on how to solve that problem!
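    A hedged sketch of the detail that most often bites here (based on the libav APIs the question already uses; error handling elided): when the decoder context is built fresh rather than taken from the demuxer, the H.264 SPS/PPS must be copied into extradata, with the padding libav requires, before the codec is opened:

        /* src: the stream's context on the demuxer side */
        AVCodecContext *src = fmt_ctx->streams[video_stream_index]->codec;

        AVCodec *decoder = avcodec_find_decoder(src->codec_id);
        AVCodecContext *dst = avcodec_alloc_context3(decoder);

        /* copy SPS/PPS; the padding bytes must exist and be zeroed */
        dst->extradata = av_mallocz(src->extradata_size + FF_INPUT_BUFFER_PADDING_SIZE);
        memcpy(dst->extradata, src->extradata, src->extradata_size);
        dst->extradata_size = src->extradata_size;

        if (avcodec_open2(dst, decoder, NULL) < 0) { /* handle error */ }

    If the file is MP4/MKV rather than Annex B, the packets themselves are length-prefixed, and the h264_mp4toannexb bitstream filter may also be needed, depending on how the packets are handed over to the decoder.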


  • Detect TCP connection close when playing Flash video

    - by JoJo
    On the Flash client side, how do I detect when the server purposely closes the TCP connection to its video stream? I'll need to take action when this occurs - maybe attempt to restart the video or display an error message. Currently, the connection closing and the connection being slow look the same to me. The NetStream object dispatches a NetStream.Play.Stop event in both cases. When the connection is slow, it usually recovers by itself within seconds. I wish to only take action when the connection is closed, not when it is slow. Here's what my general setup looks like. It's the basic NetConnection-NetStream-Video setup.

        this.vidConnection = new NetConnection();
        this.vidConnection.addEventListener(AsyncErrorEvent.ASYNC_ERROR, this.connectionAsyncError);
        this.vidConnection.addEventListener(IOErrorEvent.IO_ERROR, this.connectionIoError);
        this.vidConnection.addEventListener(NetStatusEvent.NET_STATUS, this.connectionNetStatus);
        this.vidConnection.connect(null);

        this.vidStream = new NetStream(this.vidConnection);
        this.vidStream.addEventListener(AsyncErrorEvent.ASYNC_ERROR, this.streamAsyncError);
        this.vidStream.addEventListener(IOErrorEvent.IO_ERROR, this.streamIoError);
        this.vidStream.addEventListener(NetStatusEvent.NET_STATUS, this.streamNetStatus);

        this.vid.attachNetStream(this.vidStream);

    None of the error events fire when the server closes the TCP connection or when the connection freezes up. Only the NetStream.Play.Stop event fires. Here's a trace of what happens from initially playing the video to the TCP connection closing:

        connection net status = NetConnection.Connect.Success
        playStream(http://192.168.0.44/flv/4d29104a9aefa)
        NetStream.Play.Start
        NetStream.Buffer.Flush
        NetStream.Buffer.Full
        NetStream.Buffer.Empty
        checkDimensions 0 0
        onMetaData
        NetStream.Buffer.Full
        NetStream.Buffer.Flush
        checkDimensions 960 544
        NetStream.Buffer.Empty
        NetStream.Buffer.Flush
        NetStream.Play.Stop
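    A hedged sketch of one workaround (it assumes progressive download over HTTP as in the setup above, where NetStream exposes bytesLoaded; the thresholds are arbitrary and the handler name is hypothetical): poll the stream and treat "no new bytes for several seconds" as a closed connection, since a merely slow connection normally keeps trickling bytes in:

        import flash.utils.Timer;
        import flash.events.TimerEvent;

        private var lastBytes:uint = 0;
        private var stalledTicks:int = 0;
        private var monitor:Timer = new Timer(1000); // check once a second

        private function startMonitor():void {
            monitor.addEventListener(TimerEvent.TIMER, checkStream);
            monitor.start();
        }

        private function checkStream(e:TimerEvent):void {
            // a fully buffered stream stops loading legitimately; skip it
            if (vidStream.bytesTotal > 0 && vidStream.bytesLoaded >= vidStream.bytesTotal) return;

            if (vidStream.bytesLoaded == lastBytes) {
                stalledTicks++;
                if (stalledTicks >= 5) {
                    // five seconds with zero progress: assume the server closed
                    // the TCP connection -- restart the video or show an error
                    monitor.stop();
                    handleConnectionClosed(); // hypothetical handler
                }
            } else {
                stalledTicks = 0; // still receiving: just slow, not closed
            }
            lastBytes = vidStream.bytesLoaded;
        }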

