Search Results

Search found 5148 results on 206 pages for 'excel macro'.


  • Where did the _syscallN macros go in <linux/unistd.h>?

    - by Evan Teran
    It used to be the case that if you needed to make a system call directly in Linux without the use of an existing library, you could just include <linux/unistd.h> and it would define a macro similar to this: #define _syscall3(type,name,type1,arg1,type2,arg2,type3,arg3) \ type name(type1 arg1,type2 arg2,type3 arg3) \ { \ long __res; \ __asm__ volatile ("int $0x80" \ : "=a" (__res) \ : "0" (__NR_##name),"b" ((long)(arg1)),"c" ((long)(arg2)), \ "d" ((long)(arg3))); \ if (__res>=0) \ return (type) __res; \ errno=-__res; \ return -1; \ } Then you could just put somewhere in your code: _syscall3(ssize_t, write, int, fd, const void *, buf, size_t, count); which would define a write function for you that properly performed the system call. It seems that this system has been superseded by something (I am guessing that "[vsyscall]" page that every process gets) more robust. So what is the proper way (please be specific) for a program to perform a system call directly on newer Linux kernels? I realize that I should be using libc and let it do the work for me. But let's assume that I have a decent reason for wanting to know how to do this :-).
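
    On current kernels the closest replacement for the removed _syscallN macros is the syscall(2) wrapper from libc (raw inline assembly is still possible, but it is architecture-specific). A minimal sketch, assuming glibc on x86/x86-64; the only assumptions here are the headers and the call number coming from <sys/syscall.h>:

        /* syscall(2) plays the role the _syscall3-generated stub used to play:
           pass the call number plus arguments; it returns -1 and sets errno
           on failure. */
        #include <unistd.h>
        #include <sys/syscall.h>
        #include <stdio.h>

        int main(void)
        {
            const char msg[] = "hello via syscall\n";
            long n = syscall(SYS_write, 1, msg, sizeof msg - 1);
            if (n < 0)
                perror("write");
            return 0;
        }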

  • How do I make VC++'s debugger break on exceptions?

    - by Mason Wheeler
    I'm trying to debug a problem in a DLL written in C that keeps causing access violations. I'm using Visual C++ 2008, but the code is straight C. I'm used to Delphi, where if an exception occurs while running under the debugger, the program will immediately break to the debugger and it will give you a chance to examine the program state. In Visual C++, though, all I get is a message in the Output tab: First-chance exception at blah blah blah: Access violation reading location 0x04410000. No breaks, nothing. It just goes and unwinds the stack until it's back in my Delphi EXE, which recognizes something's wrong and alerts me there, but by that point I've lost several layers of call stack and I don't know what's going on. I've tried other debugging techniques, but whatever it's doing is taking place deep within a nested loop inside a C macro that's getting called more than 500 times, and that's just a bit beyond my skill (or my patience) to trace through. I figure there has to be some way to get the "first-chance" exception to actually give me a "chance" to handle it. There's probably some "break immediately on first-chance exceptions" configuration setting I don't know about, but it doesn't seem to be all that discoverable. Does anyone know where it is and how to enable it?
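
    For the record, the setting lives in the Debug > Exceptions... dialog: tick "Thrown" for the Win32 Exceptions category and the debugger stops on the first-chance access violation instead of only logging it. If the dialog is not an option, a vectored exception handler can force the same break from code; the following is only a sketch of that fallback, not anything VC++ requires:

        // Sketch (Windows-specific): break into the attached debugger on the
        // first access violation, before the stack unwinds back into the EXE.
        #include <windows.h>
        #include <intrin.h>

        static LONG CALLBACK BreakOnAccessViolation(PEXCEPTION_POINTERS info)
        {
            if (info->ExceptionRecord->ExceptionCode == EXCEPTION_ACCESS_VIOLATION
                && IsDebuggerPresent())
            {
                __debugbreak();                    // stop at the faulting frame
            }
            return EXCEPTION_CONTINUE_SEARCH;      // then let normal handling run
        }

        void InstallFirstChanceBreak(void)
        {
            AddVectoredExceptionHandler(1, BreakOnAccessViolation);  // 1 = call first
        }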

  • How to return an array of C++ objects from a PHP extension

    - by John Factorial
    I need to have my PHP extension return an array of objects, but I can't seem to figure out how to do this. I have a Graph object written in C++. Graph.getNodes() returns a std::map<int, Node*>. Here's the code I have currently: struct node_object { zend_object std; Node *node; }; zend_class_entry *node_ce; then PHP_METHOD(Graph, getNodes) { Graph *graph; GET_GRAPH(graph, obj) /* a macro I wrote to populate graph */ node_object* n; zval* node_zval; if (obj == NULL) { RETURN_NULL(); } if (object_init_ex(node_zval, node_ce) != SUCCESS) { RETURN_NULL(); } std::map<int, Node*> nodes = graph->getNodes(); array_init(return_value); for (std::map<int, Node*>::iterator i = nodes.begin(); i != nodes.end(); ++i) { php_printf("X"); n = (node_object*) zend_object_store_get_object(node_zval TSRMLS_CC); n->node = i->second; add_index_zval(return_value, i->first, node_zval); } php_printf("]"); } When I run php -r '$g = new Graph(); $g->getNodes();' I get the output XX]Segmentation fault, meaning the getNodes() function loops successfully through my 2-node list, returns, then segfaults. What am I doing wrong?
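
    One plausible culprit (an assumption, not a verified diagnosis): node_zval is never allocated before object_init_ex writes through it, and the single zval is then reused for every element of the array. A sketch of the loop with one zval per node, using the same PHP 5 Zend calls the snippet already relies on:

        // Sketch only: allocate one refcounted zval per node inside the loop.
        array_init(return_value);
        for (std::map<int, Node*>::iterator i = nodes.begin(); i != nodes.end(); ++i) {
            zval *node_zval;
            MAKE_STD_ZVAL(node_zval);                     /* fresh zval each time */
            if (object_init_ex(node_zval, node_ce) != SUCCESS) {
                zval_ptr_dtor(&node_zval);
                continue;
            }
            node_object *n = (node_object *) zend_object_store_get_object(node_zval TSRMLS_CC);
            n->node = i->second;
            add_index_zval(return_value, i->first, node_zval);  /* array owns it now */
        }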

  • How best to deal with warning C4305 when the type could change?

    - by identitycrisisuk
    I'm using both Ogre and NxOgre, which both have a Real typedef that is either float or double depending on a compiler flag. This has resulted in most of our compiler warnings now being: warning C4305: 'argument' : truncation from 'double' to 'Ogre::Real', when initialising variables with 0.1, for example. Normally I would use 0.1f, but then if you change the compiler flag to double precision you would get the reverse warning. I guess it's probably best to pick one and stick with it, but I'd like to write these in a way that would work for either configuration if possible. One fix would be to use #pragma warning (disable : 4305) in files where it occurs; I don't know if there are any other more complex problems that can be hidden by not having this warning. I understand I would push and pop these in header files too so that they don't end up spreading across code. Another is to create some macro based on the precision compiler flag like: #if OGRE_DOUBLE_PRECISION #define INIT_REAL(x) (x) #else #define INIT_REAL(x) static_cast<float>( x ) #endif which would require changing all the variable initialisation done so far, but at least it would be future proof. Any preferences, or something I haven't thought of?
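
    A macro-free variant of the same idea, sketched on the assumption that Ogre::Real is the only typedef involved: route literals through a tiny inline helper, so the cast is a no-op in the double-precision build and silences C4305 in the float build. The helper name is made up; OgrePrerequisites.h is assumed to be where Ogre::Real is defined.

        #include <OgrePrerequisites.h>   // assumed home of the Ogre::Real typedef

        inline Ogre::Real real_lit(double v)
        {
            return static_cast<Ogre::Real>(v);   // no-op when Real is double
        }

        // usage, instead of 0.1 or 0.1f:
        //   Ogre::Real step = real_lit(0.1);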

  • How do I add code automatically to a derived function in C++

    - by Ian
    I have code that's meant to manage operations on both a networked client and a server, since there is significant overlap between the two. However, there are a few functions here and there that are meant to be exclusively called by the client or server, and accidentally calling a client function on the server (or vice versa) is a significant source of bugs. To reduce these sorts of programming errors, I'm trying to tag functions so that they'll raise a ruckus if they're misused. My current solution is a simple macro at the start of each function that calls an assert if the client or server accesses members they shouldn't. However, this runs into problems when there are multiple derived instances of classes, in that I have to tag the implementation as client or server side in EVERY child class. What I'd like to be able to do is put a tag in the virtual member's signature in the base class, so that I only have to tag it once and not run into errors by forgetting to do it repeatedly. I've considered putting a check in a base class implementation and then referring to it with something like base::functionName, but that runs into the same issue as far as needing to manually add the function call to every implementation. Ideally, I'd be able to have parent versions of the function called automatically like default constructors do. Does anybody know how to achieve something like this in C++? Is there an alternate approach I should be considering? Thanks!
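
    One way to get behaviour that runs automatically in every override is the non-virtual interface idiom; the sketch below uses made-up names and assumes the existing client/server check can sit behind an assert. The public entry point is non-virtual and carries the tag exactly once, and derived classes override only the private virtual, so they cannot forget the check.

        #include <cassert>

        class NetEntity {
        public:
            void ClientOnlyAction() {
                assert(IsClientSide() && "client-only function called on the server");
                DoClientOnlyAction();                // dispatch to the derived class
            }
            virtual ~NetEntity() {}
        protected:
            virtual bool IsClientSide() const = 0;   // however the code tells sides apart
        private:
            virtual void DoClientOnlyAction() = 0;   // the part each child implements
        };

        class PlayerEntity : public NetEntity {
            bool IsClientSide() const { return true; /* placeholder */ }
            void DoClientOnlyAction() { /* client-side work goes here */ }
        };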

  • Issue with plotting daily data using ggplot

    - by user1723765
    I tried to plot daily data from 9 variables in ggplot, but the graph I get cannot handle the date variable properly. The x axis is unreadable and it's impossible to read the plot. I'm guessing there's an issue with the handling of dates. Here's the data: https://dl.dropbox.com/u/22681355/su.csv Here's the code I've been using: su=read.csv(file="su.csv", head=TRUE) meltdf=melt(su) ggplot(meltdf, aes(x=Date, y=value, colour=variable, group=variable))+geom_line() and here's the output: https://dl.dropbox.com/u/22681355/output.jpg Here's the same plot done in Excel; why does it look completely different?

  • Differences in MS Office Charts

    - by simendsjo
    I'm about to do some Office integration, creating charts from some data sources and adding them to PPT slides. But some coworkers are saying that using PPT charts is suboptimal, as they are missing features of Excel charts and are different in many ways. They're unable to come up with examples, and neither can I... I found the following blog about Office 2007, saying there are some differences in the programming model, but that they all use the same underlying engine. Are there really any differences in the capabilities of the charts? Is it mostly UI issues? What features are different/missing from PPT charts? Are these issues resolved in Office 2010?

  • Buy or build a tool for data reporting?

    - by Manoj
    We have been asked to provide a data reporting solution. The following are the requirements:
    i. The client has a lot of data which is generated every day as an outcome of the tests they run. These tests are run at several sites and the results get automatically backed up to a central server.
    ii. They already have Perl scripts which post-process the data and generate Excel-based reports.
    iii. They need a web-based interface for comparing those reports, and they need to mark and track issues which might be present in that data.
    I am not sure whether we should build our own tool for this or go for an already existing tool (any suggestions?). Can you please provide supporting arguments for the decision you would suggest?

  • What's a good way to provide additional decoration/metadata for Python function parameters?

    - by Will Dean
    We're considering using Python (IronPython, but I don't think that's relevant) to provide a sort of 'macro' support for another application, which controls a piece of equipment. We'd like to write fairly simple functions in Python, which take a few arguments - these would be things like times and temperatures and positions. Different functions would take different arguments, and the main application would contain a user interface (something like a property grid) which allows the users to provide values for the Python function arguments. So, for example, function1 might take a time and a temperature, and function2 might take a position and a couple of times. We'd like to be able to dynamically build the user interface from the Python code. Things which are easy to do are to find a list of functions in a module, and (using inspect.getargspec) to get a list of arguments to each function. However, just a list of argument names is not really enough - ideally we'd like to be able to include some more information about each argument - for instance, its 'type' (high-level type - time, temperature, etc., not language-level type), and perhaps a 'friendly name' or description. So, the question is, what are good 'pythonic' ways of adding this sort of information to a function? The two possibilities I have thought of are: use a strict naming convention for arguments, and then infer stuff about them from their names (fetched using getargspec); or invent our own docstring meta-language (could be little more than CSV) and use the docstring for our metadata. Because Python seems pretty popular for building scripting into large apps, I imagine this is a solved problem with some common conventions, but I haven't been able to find them.

  • What do you mean by expressiveness in a programming language?

    - by prosseek
    I see a lot of the word 'expressiveness' when people want to stress that one language is better than another, but I don't see exactly what they mean by it. Is it verboseness/succinctness? I mean, if one language can write something down more briefly than another, does that mean expressiveness? Please refer to my other question - http://stackoverflow.com/questions/2411772/article-about-code-density-as-a-measure-of-programming-language-power Is it the power of the language? Paul Graham says that one language is more powerful than another in the sense that one language can do things the other language can't (for example, Lisp can do things with macros that other languages can't). Is it just something that makes life easier? Regular expressions could be one example. Is it a different way of solving the same problem: something like SQL to solve the search problem? What do you think expressiveness of a programming language means? Can you show expressiveness using some code? What's the relationship between expressiveness and DSLs? Do people come up with DSLs to get expressiveness?

  • Recommendations for C# controls that can be used to create a hierarchical list of prioritised items?

    - by Mendokusai
    I need to be able to display and edit a hierarchical list of tasks in a C# app. It can either be a Windows form app, or ASP.NET. Basically, I want similar behaviour to the way Microsoft Project handles tasks. The control would need to:
    1) Maintain a list of items made up of several fields
    2) Each item can have a number of children (at least 3 levels of nesting)
    3) It needs to be very easy to change the parents/children of an item
    4) It needs to be very easy to edit the fields (as fast as changing cells in Excel)
    5) It needs to be very easy to reorder the items by dragging and dropping or cut and paste
    6) If I can easily connect the control to a database, even better
    Anyone got any recommendations for controls for me to look at?

  • C++ meta-splat function

    - by aaa
    Hello. Is there an existing function (in Boost MPL or Fusion) to splat a meta-vector into variadic template arguments? For example: splat<vector<T1, T2, ...>, function>::type same as function<T1, T2, ...> My search has not found one, and I do not want to reinvent one if it already exists. Edit: after some tinkering, apparently it's next to impossible to accomplish this in a general way, as it would require declaring a full template template parameter list for all possible cases. The only reasonable solution is to use a macro: #define splat(name, function) \ template<class T, ...> struct name; \ template<class T> \ struct name<T,typename boost::enable_if_c< \ result_of::size<T>::value == 1>::type> { \ typedef function< \ typename result_of::value_at_c<T,0>::type \ > type; \ }; Oh well. Thank you.
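
    For what it's worth, with C++0x/C++11 variadic templates a general splat needs no macro, provided the sequence is itself a variadic class template (std::tuple qualifies; Boost.MPL's fixed-arity vectors would need converting first). A sketch under that assumption:

        #include <tuple>
        #include <type_traits>
        #include <utility>

        // Peel the type arguments off any Seq<Ts...> and hand them to F.
        template <class Seq, template <class...> class F>
        struct splat;

        template <template <class...> class Seq, class... Ts,
                  template <class...> class F>
        struct splat<Seq<Ts...>, F> {
            typedef F<Ts...> type;
        };

        // e.g. splat<std::tuple<int, double>, std::pair>::type is std::pair<int, double>
        static_assert(std::is_same<splat<std::tuple<int, double>, std::pair>::type,
                                   std::pair<int, double>>::value,
                      "splat unpacks the tuple's type arguments into std::pair");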

  • Use LINQ to SQL results inside SQL Server stored procedure

    - by ifwdev
    Note: I'm not trying to call a SQL Server stored proc using a L2SQL datacontext. I use LINQPad for some fairly complex "reporting" that takes L2SQL output saved to an Array and is processed further. For example, it's usually much easier to do multiple levels of grouping with LINQ to Objects instead of trying to optimize a T-SQL query to run in a reasonable amount of time. What would be the easiest way to take the end result of one of these "applications" and use that in a SQL Server 2008 stored proc? The idea is to use the data for a Reporting Services Report, rather than copying and pasting into Excel (manual labor). The reports need to be accessible on the report server (not using the Report Server control in an application). I could output CSV and read that somehow via command line exec, but that seems like a hack. Thanks for your help.

  • Create DataGridView columns from Table values

    - by fireBand
    Hi, I am using a DataGridView to display items as an Excel-style spreadsheet in VB.NET. I have a table named CostTypes with column names [CostTypeID, CostType] and values [1, External] and [2, Internal] (these are constant, but more values can be added to the table). I want to create columns in the DataGridView named after the values [External, Internal]. If I use databinding directly I get the columns [CostTypeID, CostType], which is not what I am looking for. If someone could explain how to create columns at runtime in the DataGridView, or how to retrieve data from the database using LINQ so that [External, Internal] turn out to be columns, that would be great. Thanks in advance.

  • How to export data which are mapped to enumerations

    - by Joshua
    I have a set of data which needs to be imported from an Excel sheet; let's take the simplest example. Note: the data might eventually support uploading any locale. E.g., assume one of the fields denoting a user is gender, mapped to an enumeration and stored in the database as 0 for male and 1 for female, 0 and 1 being short values. If I have to import the values I cannot expect the user to punch in numbers (since they are not intuitive, and it is cumbersome when the enumerations are bigger), so what would be the correct way to map to enumerations? Should we ask them to provide a string value in these cases (e.g. male or female) and provide the transformation to an enum in our code by writing a method public static Gender Gender.fromString(String value)?

  • How do I add a VSTO project as a reference to a unit testing project?

    - by Mathias
    In order not to pollute my projects with unit tests, I like to create a separate project for my unit tests; I add a reference to the project under test in the unit tests project. However, this isn't working that well with my VSTO Excel add-in projects: when I create a separate unit test project and go to Add Reference > Projects, there is no project to pick. What I have done so far is Add Reference > Browse, and picked the add-in DLL from the debug folder. I have also run into issues from time to time with this, with the reference suddenly not working, requiring me to remove/re-add the DLL reference. Can anybody explain why a VSTO project doesn't show up as a regular project? And is there a better way to go about it than what I am doing presently?

  • Determine whether app is communicating with APNS sandbox or production environment

    - by goldierox
    I have push notifications set up in my app. I'm trying to determine whether the device token I've received from APNS in the application:didRegisterForRemoteNotificationsWithDeviceToken: method came from the sandbox or production environment. If I can distinguish which environment issued the token, I'll be able to tell my server which environment to send the push notification to. I've tried using the DEBUG macro to determine this, but I've seen some strange behavior with this and don't trust it to be 100% correct. #ifdef DEBUG BOOL isProd = YES; #else BOOL isProd = NO; #endif Ideally, I'd be able to examine the aps-environment entitlement (value is Development or Production) in code, but I'm not sure if this is even possible. What's the proper way to determine whether your app is communicating with the APNS sandbox or production environment? I'm assuming that the server needs to know this in the first place. Please correct me if this assumption is incorrect. Edited: Apple's documentation on Provider Communication with APNS details the difference between communicating with the sandbox and production. However, the documentation doesn't give information on how to keep the token registration (from the iOS client app) and the server communication consistent.

  • Pass data from workspace to a function

    - by Tim
    I created a GUI and used uiimport to import a dataset into the MATLAB workspace, and I would like to pass this imported data to another function in MATLAB. How do I pass this imported dataset into another function? I tried doing diz... but it couldn't pick up diz... it doesn't pick up the data in the MATLAB workspace. Any ideas? [file_input, pathname] = uigetfile( ... {'*.txt', 'Text (*.txt)'; ... '*.xls', 'Excel (*.xls)'; ... '*.*', 'All Files (*.*)'}, ... 'Select files'); uiimport(file_input); M = dlmread(file_input); X = freed(M);

  • How to make NSIS create a file in the %APPDATA% of another user?

    - by SCO
    I wrote an NSIS installer script for PostgreSQL 9.1. The installer works properly, but after the reboot, the service is not started (right after the install, the database started properly, though). I guess this is because the postgres service user has no pgpass.conf file in its %APPDATA%. As far as I understand my install script, the pgpass.conf file is added to the %APPDATA% of the user running the installer (an administrator account in my case). This will not help. I tried the following, to add the pgpass.conf to all users, but I guess this adds it to a kind of wildcard, not to the %APPDATA% of each user: SetShellVarContext "all" SetOutPath "$APPDATA\postgresql" File config\pgpass.conf SetShellVarContext "current" SetOutPath "$APPDATA\postgresql" File config\pgpass.conf I couldn't find the macro name for c:/Users/postgres in the documentation. This could be a way to achieve it. But across Windows XP and 7, I need a portable way to address the /Users directory. I wish I could use something like SetShellVarContext "postgres", and then have NSIS write the pgpass.conf file in c:\Users\postgres\AppData\postgresql. Is there a way to do this? Thank you!

  • Problem with circular definition in Scheme

    - by user8472
    I am currently working through SICP using Guile as my primary language for the exercises. I have found a strange behavior while implementing the exercises in chapter 3.5. I have reproduced this behavior using Guile 1.4, Guile 1.8.6 and Guile 1.8.7 on a variety of platforms and am certain it is not specific to my setup. This code works fine (and computes e): (define y (integral (delay dy) 1 0.001)) (define dy (stream-map (lambda (x) x) y)) (stream-ref y 1000) The following code should give an identical result: (define (solve f y0 dt) (define y (integral (delay dy) y0 dt)) (define dy (stream-map f y)) y) (solve (lambda (x) x) 1 0.001) But it yields the error message: standard input:7:14: While evaluating arguments to stream-map in expression (stream-map f y): standard input:7:14: Unbound variable: y ABORT: (unbound-variable) So when embedded in a procedure definition, the (define y ...) does not work, whereas outside the procedure in the global environment at the REPL it works fine. What am I doing wrong here? I can post the auxiliary code (i.e., the definitions of integral, stream-map etc.) if necessary, too. With the exception of the system-dependent code for cons-stream, they are all in the book. My own implementation of cons-stream for Guile is as follows: (define-macro (cons-stream a b) `(cons ,a (delay ,b)))

  • Sorcery/Capybara: Cannot log in with :js => true

    - by PlankTon
    I've been using capybara for a while, but I'm new to sorcery. I have a very odd problem whereby if I run the specs without Capybara's :js => true functionality I can log in fine, but if I try to specify :js => true on a spec, username/password cannot be found. Here's the authentication macro: module AuthenticationMacros def sign_in user = FactoryGirl.create(:user) user.activate! visit new_sessions_path fill_in 'Email Address', :with => user.email fill_in 'Password', :with => 'foobar' click_button 'Sign In' user end end Which is called in specs like this: feature "project setup" do include AuthenticationMacros background do sign_in end scenario "creating a project" do "my spec here" end The above code works fine. However, IF I change the scenario spec from (in this case) scenario "adding questions to a project" do to scenario "adding questions to a project", :js => true do login fails with an 'incorrect username/password' combination. Literally the only change is that :js => true. I'm using the default capybara javascript driver. (Loads up Firefox) Any ideas what could be going on here? I'm completely stumped. I'm using Capybara 2.0.1, Sorcery 0.7.13. There is no javascript on the sign in page and save_and_open_page before clicking 'sign in' confirms that the correct details are entered into the username/password fields. Any suggestions really appreciated - I'm at a loss.

  • C#: Changing the order of columns when binding DataTable to a GridView

    - by Nir
    How is it possible to change the displayed order of columns from a DataTable? For example, dataTable "dt" contains two columns "a" and "b". I bind it to a GridView like this: gridView.DataSource = dt; gridView.DataBind(); But I'd like the GridView to display "b" first (leftmost). Important point: I'm using this to export to Excel and there's no actual output to screen, using: HtmlTextWriter htw = new HtmlTextWriter(sw); gridView.RenderControl(htw); Thanks!

  • Doing an ajax/json add to a database, and having a "wait, doing operation" icon

    - by Dejan.S
    Hi. I've got a part of my page I want to improve. It's a file upload where users can add their contacts from files like Excel, CSV & Outlook. I read the contacts and place them in the database, so what I would like to do is have an icon that spins while that operation is running; how could I do that? Ajax? I don't want a progress bar for the file upload, but for the operation of reading the file. EDIT: I want to know how to make this work with the add to database using Ajax. Like, should I use an UpdatePanel? Thanks

  • Is Google Mock a good mocking framework?

    - by des4maisons
    I am pioneering unit testing efforts at my company, and need to choose a mocking framework to use. I have never used a mocking framework before. We have already chosen Google Test, so using Google Mock would be nice. However, my initial impressions after looking at Google Mock's tutorial are: The need to re-declare each method in the mocking class with a MOCK_METHODn macro seems unnecessary and seems to go against the DRY principle. Their matchers (e.g., the '_' in EXPECT_CALL(turtle, Forward(_));) and the order of matching seem almost too powerful. Like, it would be easy to say something you don't mean, and miss bugs that way. I have high confidence in Google's developers, and low confidence in my own ability to judge mocking frameworks, never having used them before. So my question is: are these valid concerns? Or is there no better way to define a mock object, and are the matchers intuitive to use in practice? I would appreciate answers from anyone who has used Google Mock before, and comparisons to other C++ frameworks would be helpful.
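
    On the matcher worry specifically: the '_' wildcard is opt-in, and a StrictMock fails the test on any call that has no matching expectation, so over-broad matchers tend to surface mistakes rather than hide them. A small sketch built on the Turtle example from Google Mock's own tutorial:

        #include <gmock/gmock.h>
        #include <gtest/gtest.h>

        // Interface and mock as in the Google Mock tutorial's Turtle example.
        class Turtle {
        public:
            virtual ~Turtle() {}
            virtual void PenDown() = 0;
            virtual void Forward(int distance) = 0;
        };

        class MockTurtle : public Turtle {
        public:
            MOCK_METHOD0(PenDown, void());
            MOCK_METHOD1(Forward, void(int distance));
        };

        TEST(Painter, DrawsOneUnitLine) {
            // StrictMock: any call without a matching EXPECT_CALL fails the test.
            ::testing::StrictMock<MockTurtle> turtle;
            EXPECT_CALL(turtle, PenDown());
            EXPECT_CALL(turtle, Forward(1));   // exact argument instead of _

            turtle.PenDown();                  // stand-in for the code under test
            turtle.Forward(1);
        }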

  • Omit return type in C++0x

    - by Clinton
    I've recently found myself using the following macro with gcc 4.5 in C++0x mode: #define RETURN(x) -> decltype(x) { return x; } And writing functions like this: template <class T> auto f(T&& x) RETURN (( g(h(std::forward<T>(x))) )) I've been doing this to avoid the inconvenience of having to effectively write the function body twice, and having to keep changes in the body and the return type in sync (which in my opinion is a disaster waiting to happen). The problem is that this technique only works on one-line functions. So when I have something like this (convoluted example): template <class T> auto f(T&& x) -> ... { auto y1 = f(x); auto y2 = h(y1, g1(x)); auto y3 = h(y1, g2(x)); if (y1) { ++y3; } return h2(y2, y3); } Then I have to put something horrible in the return type. Furthermore, whenever I update the function, I'll need to change the return type, and if I don't change it correctly, I'll get a compile error if I'm lucky, or a runtime bug in the worst case. Having to copy and paste changes to two locations and keep them in sync is, I feel, not good practice. And I can't think of a situation where I'd want an implicit cast on return instead of an explicit cast. Surely there is a way to ask the compiler to deduce this information. What is the point of the compiler keeping it a secret? I thought C++0x was designed so such duplication would not be required.
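
    For reference, a later standard added exactly this: C++14 return type deduction lets a plain auto return work for multi-statement bodies, so the duplication disappears there. A sketch with stand-in helpers (the question's g1, g2, h, h2 are hypothetical, and the recursive f(x) call is replaced, since a function cannot call itself before its deduced return type is known):

        // Requires C++14 (-std=c++14); the helpers below are stubs for illustration.
        #include <iostream>

        inline int  g1(int v)        { return v + 1; }
        inline int  g2(int v)        { return v + 2; }
        inline int  h(int a, int b)  { return a * b; }
        inline long h2(int a, int b) { return static_cast<long>(a) + b; }

        template <class T>
        auto f(T&& x)                  // no trailing return type needed
        {
            auto y1 = g1(x);
            auto y2 = h(y1, g1(x));
            auto y3 = h(y1, g2(x));
            if (y1) { ++y3; }
            return h2(y2, y3);         // return type deduced from this statement
        }

        int main()
        {
            std::cout << f(3) << '\n';
        }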
