Search Results

Search found 4772 results on 191 pages for 'complex'.

Page 142/191 | < Previous Page | 138 139 140 141 142 143 144 145 146 147 148 149  | Next Page >

  • Can you review my Perl rewrite of Cucumber?

    - by Evgeny
    There is a team working on acceptance testing of an X11 GUI application in our company, and they created a monstrous acceptance testing framework that drives the GUI as well as running scenarios. The framework is written in Perl 5, and the scenario files look more like very complex Perl programs (thousands of lines long, in a procedural style) than acceptance tests.

    I recently learned Ruby's Cucumber, and I have generally been using Ruby for quite a long time. But unfortunately I can't just shove in Ruby to replace Perl, because the people who are writing all of this don't know Ruby and it's quite certain that they won't want "this" kind of interruption. So to bring Ruby's Cucumber a bit closer to their work, I rewrote it in Perl 5. Unfortunately I am really not a Perl programmer, and would love to get a code review and to hear suggestions from people who know both Perl and Cucumber.

    Perl/Cucumber StackOverflow users - please help me with this open-source attempt to re-create Cucumber for Perl! I would love to hear your comments and will gladly accept any help. The minimal source code is here: http://github.com/kesor/p5-cucumber

    Thank you for your attention. For those not familiar with Cucumber - please take just one small moment to look at this one small page: http://wiki.github.com/aslakhellesoy/cucumber

    Read the article

  • .NET projects build automation with NAnt/MSBuild + SVN

    - by petr k.
    Hi everyone, for quite a while now I've been trying to figure out how to set up an automated build process at our shop. I've read many posts and guides on this matter and none of them really fits my specific needs. My SVN repository is laid out as follows:

      \projects
        \projectA (a product)
          \tags
            \1.0.0.1
            \1.0.0.2
            ...
          \trunk
            \src
              \proj1 (a VS C# project)
              \proj2
            \documentation

    Then I have a network share, with a folder for each project (product), which in turn contains the binaries, written documentation and the generated API documentation (via NDoc - each project may have an .ndoc file in the repository) for every historical version (from the tags SVN folder) and for the latest version as well (from the trunk).

    Basically, what I want to do in a scheduled batch build are these steps:

    1. examine the project's SVN folder and identify tags not present in the network share
    2. for each of these tags:
       - check out the tag folder
       - build (with Release config)
       - copy the resulting binaries to the network share
       - search for .ndoc files
       - generate CHM files via NDoc
       - copy the resulting CHM files to the network share
    3. do the same as in 2., but for the HEAD revision of trunk

    Now, the trouble is, I have no idea where to start. I do not keep .sln files in the repository, but I am able to replace these with MSBuild files which in turn build the C# projects belonging to the specific product. I guess the most troubling part is the examination of the repository for tags which have not been processed yet - i.e. searching the tags and comparing them to a project's directory structure on the network share. I have no idea how to do that in any of the build tools (NAnt, MSBuild). Could you please provide me with some pointers on how to approach this task, as a whole and in detail as well? I do not care if I use NAnt, MSBuild, or both. I am aware that this might be rather complex, but every idea and NAnt/MSBuild snippet will be a great help. Thanks in advance.
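    As a rough illustration of step 1 (finding tags that have not been published yet), a comparison along these lines could be wrapped in a custom MSBuild or NAnt task, or run as a small pre-build console tool. The paths below are made-up placeholders, and the tag list could equally come from "svn list" instead of a checked-out tags folder; this is only a sketch, not the asker's setup:

      using System;
      using System.IO;
      using System.Linq;

      class TagDiff
      {
          static void Main()
          {
              // Hypothetical locations - adjust to the real checkout and share layout.
              string tagsWorkingCopy = @"C:\build\projectA\tags";
              string publishedShare  = @"\\server\builds\projectA";

              var repositoryTags = Directory.GetDirectories(tagsWorkingCopy).Select(Path.GetFileName);
              var publishedTags  = Directory.GetDirectories(publishedShare).Select(Path.GetFileName);

              // Tags present in the repository but not yet copied to the network share.
              foreach (string tag in repositoryTags.Except(publishedTags, StringComparer.OrdinalIgnoreCase))
                  Console.WriteLine(tag);
          }
      }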

    Read the article

  • JMS equivalent in .Net

    - by rauts
    Hi All, I am trying to make a common abstract interface over the messaging infrastructure in our company. The design goal is two-fold: first, to hide the complexity of programming from the developers (I know it's not very complex, but still, simplify it further), and second, to make the developers independent of the vendor-specific messaging infrastructure (i.e. it can be MQSeries or EMS or MSMQ). The very common option is using a WCF layer over the messaging infrastructure - use the MQSeries custom channel for WCF or use the EMS custom channel for WCF - but both are ruled out due to lack of a proper version of MQSeries and EMS. Can someone please suggest what the possible solutions to this problem are? One which I can think of is to have a custom wrapper like JMS. Has anyone ever tried something similar before? Any help would be fantastic. By the way, I am trying to create this wrapper in C# 3.5. Regards
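    One shape such a vendor-neutral wrapper could take - a minimal sketch of JMS-like contracts, where every name below is a hypothetical illustration rather than an existing library; concrete factory implementations for MSMQ, MQSeries or EMS would then be selected through configuration:

      using System;
      using System.Collections.Generic;

      // Vendor-neutral contracts the application codes against (all names here are hypothetical).
      public interface IMessage
      {
          string Body { get; set; }
          IDictionary<string, string> Headers { get; }
      }

      public interface IMessageProducer : IDisposable
      {
          void Send(IMessage message);                 // publish to the configured destination
      }

      public interface IMessageConsumer : IDisposable
      {
          void Receive(Action<IMessage> onMessage);    // blocking or callback-based receive
      }

      // One factory per transport (MSMQ, MQSeries, EMS), chosen via configuration.
      public interface IMessagingFactory
      {
          IMessage CreateMessage(string body);
          IMessageProducer CreateProducer(string destination);
          IMessageConsumer CreateConsumer(string destination);
      }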

    Read the article

  • Database structure - is mySQL the right choice?

    - by Industrial
    Hi everyone,

    We are currently planning the database structure of a quite complex e-commerce web app that has flexibility as its main cornerstone. Our app features a large amount of data (products), and we have run into a slight headache trying to keep performance high without compromising normalization rules in the database, or leaving our highly beloved flexibility concept behind when integrating product options (also widely known as product attributes or parameters).

    Based on various references and sources available, we have made up lists of pros and cons of all major and well-known database patterns to solve this. After comparing these, we have come up with two final alternatives:

    EAV (entity-attribute-value model):
      Pros: the database is used for all sorting.
      Cons: all related queries will include a number of joins between multiple tables in order to complete the collection of data.

    SLOB (serialized LOB, also known as Facade?):
      Pros: very flexible. Keeps the number of necessary joins low compared to an EAV design pattern. Easy to update/add/remove data from each product.
      Cons: all sorting will be done by the application instead of the database. Will use lots of performance (memory?) when big datasets are processed by a large number of users.

    Our main questions:

    Which pattern/structure would you use, or maybe even a different solution?
    Are there better databases besides MySQL available nowadays to accomplish what we want?

    Thanks a lot!

    Reference: http://stackoverflow.com/questions/695752/product-table-many-kinds-of-product-each-product-has-many-parameters
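    For reference, a bare-bones sketch of the EAV shape being weighed here (made-up table and column names, no indexes shown); the query at the end illustrates why each attribute fetched adds a join, which is the pattern's main cost:

      -- Minimal EAV sketch (hypothetical names).
      CREATE TABLE products   (product_id   INT PRIMARY KEY, name VARCHAR(255));
      CREATE TABLE attributes (attribute_id INT PRIMARY KEY, name VARCHAR(255));
      CREATE TABLE product_attribute_values (
          product_id   INT,
          attribute_id INT,
          value        VARCHAR(255),
          PRIMARY KEY (product_id, attribute_id)
      );

      -- Fetching even one option per product costs a join per attribute.
      SELECT p.name, v.value AS color
      FROM products p
      JOIN product_attribute_values v ON v.product_id = p.product_id
      JOIN attributes a ON a.attribute_id = v.attribute_id AND a.name = 'color';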

    Read the article

  • How can parallelism affect number of results?

    - by spender
    I have a fairly complex query that looks something like this:

      create table Items(SomeOtherTableID int, SomeField int)
      create table SomeOtherTable(Id int, GroupID int)

      with cte1 as (
          select SomeOtherTableID, COUNT(*) SubItemCount
          from Items t
          where t.SomeField is not null
          group by SomeOtherTableID
      ), cte2 as (
          select tc.SomeOtherTableID,
                 ROW_NUMBER() over (partition by a.GroupID order by tc.SubItemCount desc) SubItemRank
          from Items t
          inner join SomeOtherTable a on a.Id = t.SomeOtherTableID
          inner join cte1 tc on tc.SomeOtherTableID = t.SomeOtherTableID
          where t.SomeField is not null
      ), cte3 as (
          select SomeOtherTableID from cte2 where SubItemRank = 1
      )
      select *
      from cte3 t1
      inner join cte3 t2 on t1.SomeOtherTableID < t2.SomeOtherTableID
      option (maxdop 1)

    The query is such that cte3 is filled with 6222 distinct results. In the final select, I am performing a cross join of cte3 with itself (so that I can compare every value in the table with every other value in the table at a later point). Notice the final line:

      option (maxdop 1)

    Apparently, this switches off parallelism. So, with 6222 result rows in cte3, I would expect (6222*6221)/2, or 19353531, results in the subsequent cross-joining select, and with the final maxdop line in place, that is indeed the case. However, when I remove the maxdop line, the number of results jumps to 19380454. I have 4 cores on my dev box. WTF? Can anyone explain why this is? Do I need to reconsider previous queries that cross join in this way?

    Read the article

  • dynamically create class in scala, should I use interpreter?

    - by Phil
    Hi, I want to create a class at run time in Scala. For now, just consider a simple case where I want to make the equivalent of a Java bean with some attributes; I only know these attributes at run time. How can I create the Scala class? I am willing to create it from a Scala source file if there is a way to compile it and load it at run time - I may want to, as I sometimes have some complex function I want to add to the class. How can I do it?

    I worry that the Scala interpreter which I read about sandboxes the interpreted code that it loads, so that it won't be available to the general application hosting the interpreter. If this is the case, then I wouldn't be able to use the dynamically loaded Scala class.

    Anyway, the question is: how can I dynamically create a Scala class at run time and use it in my application? The best case is to load it from a Scala source file at run time, something like interpreterSource("file.scala"), and have it loaded into my current runtime; the second-best case is some creation by calling methods, i.e. createClass(...), to create it at run time.

    Thanks, Phil
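    A rough sketch of the interpreter route, assuming Scala 2.x with the scala-compiler jar on the application's classpath; exact package and method names vary between Scala versions, so treat this as an outline rather than a tested recipe:

      import scala.tools.nsc.Settings
      import scala.tools.nsc.interpreter.IMain

      object DynamicClassDemo {
        def main(args: Array[String]): Unit = {
          val settings = new Settings
          settings.usejavacp.value = true   // let the interpreter see the host application's classpath

          val interpreter = new IMain(settings)

          // Compile and load a class definition built at run time
          // (the source string could equally be read from a .scala file).
          interpreter.interpret(
            """class RuntimeBean { var name: String = ""; var age: Int = 0 }""")

          // Use it inside the same interpreter instance.
          interpreter.interpret("""val bean = new RuntimeBean; bean.name = "Phil"; println(bean.name)""")
        }
      }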

    Read the article

  • How to add and remove nested model fields dynamically using Haml and Formtastic

    - by Brightbyte8
    We've all seen the brilliant complex-forms railscast where Ryan Bates explains how to dynamically add or remove nested objects within the parent object's form using JavaScript. Has anyone got any ideas about how these methods need to be modified so as to work with Haml and Formtastic? To add some context, here's a simplified version of the problem I'm currently facing:

      # Teacher form (which has nested subject forms) [from my application]
      - semantic_form_for(@teacher) do |form|
        - form.inputs do
          = form.input :first_name
          = form.input :surname
          = form.input :city
        = render 'subject_fields', :form => form
        = link_to_add_fields "Add Subject", form, :subjects

      # Individual Subject form partial [from my application]
      - form.fields_for :subjects do |ff|
        #subject_field
          = ff.input :name
          = ff.input :exam
          = ff.input :level
          = ff.hidden_field :_destroy
          = link_to_remove_fields "Remove Subject", ff

      # Application Helper (straight from Railscasts)
      def link_to_remove_fields(name, f)
        f.hidden_field(:_destroy) + link_to_function(name, "remove_fields(this)")
      end

      def link_to_add_fields(name, f, association)
        new_object = f.object.class.reflect_on_association(association).klass.new
        fields = f.fields_for(association, new_object, :child_index => "new_#{association}") do |builder|
          render(association.to_s.singularize + "_fields", :f => builder)
        end
        link_to_function(name, h("add_fields(this, \"#{association}\", \"#{escape_javascript(fields)}\")"))
      end

      # Application.js (straight from Railscasts)
      function remove_fields(link) {
        $(link).previous("input[type=hidden]").value = "1";
        $(link).up(".fields").hide();
      }

      function add_fields(link, association, content) {
        var new_id = new Date().getTime();
        var regexp = new RegExp("new_" + association, "g")
        $(link).up().insert({ before: content.replace(regexp, new_id) });
      }

    The problem with this implementation seems to be with the JavaScript methods - the DOM tree of a Formtastic form differs greatly from a regular Rails form. I've seen this question asked online a few times but haven't come across an answer yet - now you know that help will be appreciated by more than just me! Jack

    Read the article

  • Qt Object Linker Problem "undefined reference to vtable"

    - by Thomas
    This is my header:

      #ifndef BARELYSOCKET_H
      #define BARELYSOCKET_H

      #include <QObject>

      //! The First Draw of the BarelySocket!
      class BarelySocket : public QObject
      {
          Q_OBJECT
      public:
          BarelySocket();

      public slots:
          void sendMessage(Message aMessage);

      signals:
          void reciveMessage(Message aMessage);

      private:
          // QVector<Message> reciveMessages;
      };

      #endif // BARELYSOCKET_H

    This is my class:

      #include <QTGui>
      #include <QObject>

      #include "type.h"
      #include "client.h"
      #include "server.h"
      #include "barelysocket.h"

      BarelySocket::BarelySocket()
      {
          // this->reciveMessages.clear();
          qDebug("BarelySocket::BarelySocket()");
      }

      void BarelySocket::sendMessage(Message aMessage)
      {
      }

      void BarelySocket::reciveMessage(Message aMessage)
      {
      }

    I get the linker problem:

      undefined reference to 'vtable for BarelySocket'

    This should mean I have a virtual function that is not implemented. But as you can see, there is none. I commented out the vector because that should solve the problem, but it does not. The Message is a complex struct, but even converting it to int did not solve it. I searched Mr G, but he could not help me.

    Thank you for your support,
    Thomas

    Read the article

  • WPF UI Automation - AutomationElement.FindFirst fails when there are lots of elements

    - by Orion Edwards
    We've got some automated UI tests for our WPF app (.NET 4); these tests use the UI Automation APIs. We call AutomationElement.FindFirst to find a target element, and then interact with it. Example (pseudocode):

      var nameEquals = new PropertyCondition(AutomationElement.NameProperty, "OurAppWindow");
      var appWindow = DesktopWindow.FindFirst(TreeScope.Children, nameEquals); // this succeeds

      var idEquals = new PropertyCondition(AutomationElement.AutomationIdProperty, "ControlId");
      var someItem = appWindow.FindFirst(TreeScope.Descendants, idEquals); // this succeeds sometimes, and fails sometimes!

    The problem is, the appWindow.FindFirst will sometimes fail and return null, even when the element is present. I've written a helper function which walks the UI automation tree manually and prints it out, and the element with the correct ID is present in all cases. It seems to be related to how many other items are also being displayed in the window. If there are no other items then it always succeeds, but when there are many other complex UI elements being displayed alongside it, then the find fails.

    I can't find any documented element limit mentioned for any of the automation APIs - is there some way around this? I'm thinking I might have to write my own implementation of FindFirst which does the tree walk manually itself... As far as I can tell this should work, because my tree-printer utility function does exactly that, and it's OK, but it seems like this would be unnecessary and slow :-( Any help would be greatly appreciated
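    A sketch of the manual fallback described above - walking the raw element tree with TreeWalker instead of FindFirst. It is deliberately simple (no caching, no retry or timeout) and untested against the application in question, but the calls are standard System.Windows.Automation API:

      using System.Windows.Automation;

      static class AutomationSearch
      {
          // Depth-first walk of the raw view, returning the first element whose AutomationId matches.
          public static AutomationElement FindByAutomationId(AutomationElement root, string automationId)
          {
              if (root == null)
                  return null;
              if (root.Current.AutomationId == automationId)
                  return root;

              var child = TreeWalker.RawViewWalker.GetFirstChild(root);
              while (child != null)
              {
                  var match = FindByAutomationId(child, automationId);
                  if (match != null)
                      return match;
                  child = TreeWalker.RawViewWalker.GetNextSibling(child);
              }
              return null;
          }
      }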

    Read the article

  • Decisions in teaching someone else to program: language selection

    - by Dinah
    My friend would like for me to guide her into learning programming. She's already proven enormous aptitude for thinking like a programmer but is scared of the idea of programming, since in her mind it's relegated to some magical realm accessible only to smart people and trained computer scientists (ironically, I am neither, but that's beside the point). My main question is the age-old and irritating question: which language should I choose? I've narrowed it down to these:

    PHP: dead simple to start with, and I remember enough of the language to answer all novice questions. However, I can think of a million reasons why I wouldn't recommend this as a first language, the most diplomatic of which is that there's no desktop app option to which I would feel comfortable subjecting a novice.

    Python: supposed to be wonderful for beginners, and generally everything I've heard about it screams that this is the correct choice. That's the problem: everything I've heard about it. I don't know it yet and have a lot of projects going on right now, so I don't feel like learning it yet - but I'm going to be the tech support when any little thing goes wrong. I know there are tons of online resources, but in the frustration of the moment it's always going to be just me.

    C#: this is the language I'm most comfortable with, so I know I can be good tech support. I also love this language and its versatility and community. The big drawback here is that I remember when I first learned it after doing mainly PHP, Perl, and JavaScript, and I found the experience overwhelming. You are simultaneously learning: programming concepts, C# syntax, strong typing, OOP, and a complex, powerful IDE with a bazillion options and buttons all over it.

    Read the article

  • Python for a hobbyist programmer ( a few questions)

    - by Matt
    I'm a hobbyist programmer (only in TI-Basic before now), and after much, much, much debating with myself, I've decided to learn Python. I don't have a ton of free time to teach myself a hundred languages, and all programming I do will be for personal use or for distributing to people who need it, so I decided that I needed one good, strong language to be good at. My questions:

    Is Python powerful enough to handle most things that a typical programmer might do in his off-time? I have in mind things like complex stat generators based on user input for tabletop games, making small games, automating install processes, and building interactive websites, but probably a hundred things along those lines.

    Does Python handle networking tasks fairly well?

    Can Python source be obfuscated, or is it going to be open-source by nature? The reason I ask this is that if I make something cool and distribute it, I don't want some idiot script kiddie to edit his own name in and say he wrote it.

    And how popular is Python compared to other languages? Ideally, my language would be good and useful, with help found online without extreme difficulty, but not so common that every idiot with a computer knows it. I like the idea of knowing a slightly obscure language.

    Thanks a ton for any help you can provide.

    Read the article

  • Why is Reporting Services report vastly slower than its query?

    - by Telos
    I have a query that takes roughly 2 minutes to run. It's not terribly complex in terms of parameters or anything, and the report itself doesn't do any truly extensive processing. Basically just spits the data straight out in a nice format. (Actually one of the reports doesn't format the data at all, just returns a flat table meant to be manipulated in Excel.) It's not returning a massive set of data either. Yet the report takes upwards of 30 minutes to run. What could cause this? This is SSRS 2005 against a SQL 2005 database, btw.

    EDIT: OK, I found that with the addition of WITH (NOLOCK) in the report it takes the same time as the query does through SSMS. Why would the query be handled differently if it's coming from Reporting Services (or Visual Studio on my local machine) than if coming from SSMS on my local machine? I saw the query running in Activity Monitor a couple of times in SLEEP_WAIT mode, but not blocked by anything...

    EDIT2: The connection string is:

      Data Source=SERVERNAME;Initial Catalog=DBName

    Read the article

  • Model Binding, a simple, simple question

    - by Paul Hatcherian
    I have a struct which works much like the System.Nullable type:

      public struct SpecialProperty<T>
      {
          public static implicit operator T(SpecialProperty<T> value)
          {
              return value.Value;
          }

          public static implicit operator SpecialProperty<T>(T value)
          {
              return new SpecialProperty<T> { Value = value };
          }

          T internalValue;

          public T Value
          {
              get { return internalValue; }
              set { internalValue = value; }
          }

          public override bool Equals(object other)
          {
              return Value.Equals(other);
          }

          public override int GetHashCode()
          {
              return Value.GetHashCode();
          }

          public override string ToString()
          {
              return Value.ToString();
          }
      }

    I'm trying to use it with ASP.NET MVC binding. Using the default model binder, the property will always yield null. I can fix this by adding ".Value" to the end of every form input name, but I just want it to bind to the new type directly using some sort of custom model binder - yet all the solutions I've tried seemed needlessly complex. I feel like I should be able to extend the default binder and, with a few lines of code, redirect the property binding to the entire model using implicit conversion. I don't quite get the binding paradigm of the default binder, but it seems really stuck on this distinction between the model and model properties. What is the simplest method to do this? Thanks!
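    A sketch of what that "few lines of code" binder might look like - an IModelBinder that binds the raw posted value under the property's own name and wraps it in SpecialProperty<T> via reflection. This is an untested outline, and the registration shown in the trailing comment assumes binders are registered per closed generic type:

      using System;
      using System.Web.Mvc;

      public class SpecialPropertyModelBinder : IModelBinder
      {
          public object BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext)
          {
              // Read the raw posted value under the property's own name (no ".Value" suffix needed).
              var result = bindingContext.ValueProvider.GetValue(bindingContext.ModelName);
              if (result == null)
                  return null;

              // bindingContext.ModelType is SpecialProperty<T>; pull out T and convert the raw value to it.
              Type innerType = bindingContext.ModelType.GetGenericArguments()[0];
              object innerValue = result.ConvertTo(innerType);

              // Build the wrapper and assign its Value property via reflection.
              object wrapper = Activator.CreateInstance(bindingContext.ModelType);
              bindingContext.ModelType.GetProperty("Value").SetValue(wrapper, innerValue, null);
              return wrapper;
          }
      }

      // Registration in Application_Start would look something like:
      // ModelBinders.Binders.Add(typeof(SpecialProperty<int>), new SpecialPropertyModelBinder());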

    Read the article

  • Remove duplicates from a list of nested dictionaries

    - by user2924306
    I'm writing my first Python program to manage users in Atlassian OnDemand using their RESTful API. I call the users/search?username= API to retrieve lists of users, which returns JSON. The result is a list of complex dictionary types that look something like this:

      [
          {
              "self": "http://www.example.com/jira/rest/api/2/user?username=fred",
              "name": "fred",
              "avatarUrls": {
                  "24x24": "http://www.example.com/jira/secure/useravatar?size=small&ownerId=fred",
                  "16x16": "http://www.example.com/jira/secure/useravatar?size=xsmall&ownerId=fred",
                  "32x32": "http://www.example.com/jira/secure/useravatar?size=medium&ownerId=fred",
                  "48x48": "http://www.example.com/jira/secure/useravatar?size=large&ownerId=fred"
              },
              "displayName": "Fred F. User",
              "active": false
          },
          {
              "self": "http://www.example.com/jira/rest/api/2/user?username=andrew",
              "name": "andrew",
              "avatarUrls": {
                  "24x24": "http://www.example.com/jira/secure/useravatar?size=small&ownerId=andrew",
                  "16x16": "http://www.example.com/jira/secure/useravatar?size=xsmall&ownerId=andrew",
                  "32x32": "http://www.example.com/jira/secure/useravatar?size=medium&ownerId=andrew",
                  "48x48": "http://www.example.com/jira/secure/useravatar?size=large&ownerId=andrew"
              },
              "displayName": "Andrew Anderson",
              "active": false
          }
      ]

    I'm calling this multiple times and thus getting duplicate people in my results. I have been searching and reading but cannot figure out how to deduplicate this list. I figured out how to sort this list using a lambda function. I realize I could sort the list, then iterate and delete duplicates, but I'm thinking there must be a more elegant solution. Thank you!
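    A small sketch of one dictionary-keyed way to collapse the duplicates, assuming the "name" field uniquely identifies a user across the combined result pages:

      def dedupe_users(users):
          """Collapse duplicate user dicts, keeping the first occurrence of each name."""
          unique = {}
          for user in users:
              unique.setdefault(user["name"], user)
          return list(unique.values())

      # Duplicate "fred" entries collapse to one.
      combined = [{"name": "fred", "active": False},
                  {"name": "andrew", "active": False},
                  {"name": "fred", "active": False}]
      print(dedupe_users(combined))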

    Read the article

  • finding "line-breaks" in textarea that is word-wrapping ARABIC text

    - by Irfan jamal
    I have a string of text that I display in a textarea (right-to-left orientation). The user can resize the textarea dynamically (I use jQuery for this) and the text will wrap as necessary. When the user hits submit, I will take that text and create an image using PHP, BUT before submitting I would like to know where the "line-breaks", or rather "word-wraps", occur.

    Everywhere I have looked so far only shows me how to process line-breaks on the PHP side. I want to make it clear that there ARE NO LINE-BREAKS. What I have is one LONG string that will be word-wrapped in different ways based on the width of the textarea set by the user. I can't use "columns" or any other standard width representation because I have a very complex Arabic font that is actually composed of glyphs (characters) of numerous different widths.

    If anyone knows of a way of accessing where the word wraps occur (either in a textarea or a div if need be), I'd really like to know. My only other solution is to actually store (in my DB) the width of every single character (somewhat tedious, since there are over 200 characters in 600 different fonts, for a total of... some huge number). My hopes aren't high, but I thought I would ask.

    Thanks, i. jamal

    Read the article

  • sproutcore vs javascriptMVC for web app development

    - by swami
    Hi, I want to use a JavaScript framework with MVC for a complex web application (which will be one of a set of related apps and pages) for an intranet in a digital archives. I have been looking at SproutCore and JavaScriptMVC, and I want to choose one framework and stick with it. Does anybody know what the distinguishing features are when comparing these two?

    I want something that is simple and straightforward, that I can customize/hack easily and that doesn't get in my way too much, but that at the same time gives me a basis for keeping my code nicely organized and event-driven. I also plan on using jQuery substantially.

    I know SproutCore is backed by Apple, looks like it is getting more popular by the day, and has a nice green website :), whereas JavaScriptMVC looks less professional, with less of a following and less momentum behind it. I've done the tutorials for both and I was more impressed by SproutCore (in the JMVC tutorial you don't really do anything substantial) - but somewhere in the back of my mind I feel that JMVC might just be better because it doesn't try to do too much: it just gives you MVC functionality based on a couple of jQuery plugins, and you can use jQuery for everything else, so it's flexible. Whereas SproutCore seems to have more of its own API etc., which is also nice in a way, but then you're kind of stuck within that... hmmm, I'm confused :).

    Any thoughts would be much appreciated.

    Read the article

  • Using ComplexType with ToList causes InvalidOperationException

    - by Chris Paynter
    I have this model:

      namespace ProjectTimer.Models
      {
          public class TimerContext : DbContext
          {
              public TimerContext() : base("DefaultConnection") { }

              public DbSet<Project> Projects { get; set; }
              public DbSet<ProjectTimeSpan> TimeSpans { get; set; }
          }

          public class DomainBase
          {
              [Key]
              public int Id { get; set; }
          }

          public class Project : DomainBase
          {
              public UserProfile User { get; set; }
              public string Name { get; set; }
              public string Description { get; set; }
              public IList<ProjectTimeSpan> TimeSpans { get; set; }
          }

          [ComplexType]
          public class ProjectTimeSpan
          {
              public DateTime TimeStart { get; set; }
              public DateTime TimeEnd { get; set; }
              public bool Active { get; set; }
          }
      }

    When I try to use this action I get the exception "The type 'ProjectTimer.Models.ProjectTimeSpan' has already been configured as an entity type. It cannot be reconfigured as a complex type.":

      public ActionResult Index()
      {
          using (var db = new TimerContext())
          {
              return View(db.Projects.ToList());
          }
      }

    The view is using the model:

      @model IList<ProjectTimer.Models.Project>

    Can anyone shine some light on why this would be happening?

    Read the article

  • conditionals for C++ using MSBuild/vsbuild?

    - by redtuna
    I have a C++ project in Visual Studio 2008 and I'd like to be able to compile several versions from the command line, defining conditional variables (aka #define). If it were just a single file to compile I'd use something like cl /D, but this is complex enough that I would like to be able to use VS's other features, like the build order etc.

    I've seen a similar question asked on StackOverflow and the answer was to use /p:DefineConstants="var1;var2". This doesn't seem to work with C++, though. The other problem with that answer is that it replaces the conditional variables instead of adding to them.

    The vcproj files for C++ look quite different. If MSBuild (or vsbuild) had a way to change Configurations/Tool[name="VCCLCompilerTool"] we'd be golden, but I haven't found such an option. The vcproj files are under source control, so I'd rather not have a script mess with them.

    I've considered doubling the number of configurations (one with the #define, one without). That'd be annoying, and I'm especially unhappy with having to modify these configurations in tandem every time I need to modify anything there.

    A previous similar question found no solution. I'm hoping that has changed since? How would you go about building those variants (with and without the define) from the command line? Thanks!

    Read the article

  • Increase Query Speed in PostgreSQL

    - by Anthoni Gardner
    Hello,

    First time posting here, but an avid reader. I am experiencing slow query times on my database (all tested locally thus far) and am not sure how to go about it. The database itself has 44 tables and some of those tables have over 1 million records (mainly the movies, actresses and actors tables). The database is populated via JMDB using the flat files from IMDB. Also, the SQL query that I am about to show is from that said program (which also experiences very slow search times). I have tried to include as much information as I can, such as the explain plan, etc.

      "QUERY PLAN"
      "HashAggregate  (cost=46492.52..46493.50 rows=98 width=46)"
      "  Output: public.movies.title, public.movies.movieid, public.movies.year"
      "  ->  Append  (cost=39094.17..46491.79 rows=98 width=46)"
      "        ->  HashAggregate  (cost=39094.17..39094.87 rows=70 width=46)"
      "              Output: public.movies.title, public.movies.movieid, public.movies.year"
      "              ->  Seq Scan on movies  (cost=0.00..39093.65 rows=70 width=46)"
      "                    Output: public.movies.title, public.movies.movieid, public.movies.year"
      "                    Filter: (((title)::text ~~* '%Babe%'::text) AND ((title)::text !~~* '""%}'::text))"
      "        ->  Nested Loop  (cost=0.00..7395.94 rows=28 width=46)"
      "              Output: public.movies.title, public.movies.movieid, public.movies.year"
      "              ->  Seq Scan on akatitles  (cost=0.00..7159.24 rows=28 width=4)"
      "                    Output: akatitles.movieid, akatitles.language, akatitles.title,"
      "                    Filter: (((title)::text ~~* '%Babe%'::text) AND ((title)::text !~~* '""%}'::text))"
      "              ->  Index Scan using movies_pkey on movies  (cost=0.00..8.44 rows=1 width=46)"
      "                    Output: public.movies.movieid, public.movies.title, public.movies.year, public.movies.imdbid"
      "                    Index Cond: (public.movies.movieid = akatitles.movieid)"

      SELECT * FROM (
          (SELECT DISTINCT title, movieid, year
           FROM movies
           WHERE title ILIKE '%Babe%' AND NOT (title ILIKE '"%}'))
        UNION
          (SELECT movies.title, movies.movieid, movies.year
           FROM movies
           INNER JOIN akatitles ON movies.movieid = akatitles.movieid
           WHERE akatitles.title ILIKE '%Babe%' AND NOT (akatitles.title ILIKE '"%}'))
      ) AS union_tmp2;

    Returns 612 rows in 9078 ms.
    Database backup (plain text) is 1.61 GB.

    It's a really complex query and I am not fully cognizant of it - like I said, it was spat out by JMDB. Do you have any suggestions on how I can increase the speed?

    Regards
    Anthoni

    Read the article

  • error with "pmem.c" compiling linux source code for android

    - by Preetam
    I am compiling the Linux source code for the Android emulator. When I execute the make command (for building and cross-compiling the Linux source) I get the following error in the "pmem.c" file:

      root@ubuntu:~/common# make
        CHK     include/linux/version.h
        CHK     include/linux/utsrelease.h
        SYMLINK include/asm -> include/asm-x86
        CALL    scripts/checksyscalls.sh
        CHK     include/linux/compile.h
        CC      drivers/misc/pmem.o
      drivers/misc/pmem.c:441: error: conflicting types for ‘phys_mem_access_prot’
      /home/preetam/common/arch/x86/include/asm/pgtable.h:383: note: previous declaration of ‘phys_mem_access_prot’ was here
      drivers/misc/pmem.c: In function ‘flush_pmem_file’:
      drivers/misc/pmem.c:805: error: implicit declaration of function ‘dmac_flush_range’
      drivers/misc/pmem.c: In function ‘pmem_setup’:
      drivers/misc/pmem.c:1265: error: implicit declaration of function ‘ioremap_cached’
      drivers/misc/pmem.c:1266: warning: assignment makes pointer from integer without a cast
      make[2]: *** [drivers/misc/pmem.o] Error 1
      make[1]: *** [drivers/misc] Error 2
      make: *** [drivers] Error 2
      root@ubuntu:~/common#

    How do I resolve this error? It seems that there may be some problems in the "pmem.c" file and I'll have to choose a different git repository, but that would be a very complex thing, as I have already done most of the work up to this point. I might have to find the correct version of this file. Please, someone tell me what I should do and how to solve these errors. Please help.. thank you!

    Read the article

  • Missing ideas in programming language design

    - by meyka
    I wanted to try something new, so I designed some programming languages and wrote interpreters for them:

    A rather low-level, not very expressive language. (I didn't want to parse complex expressions right at the beginning.) It featured variables (yay), subroutines with a call stack, basic arithmetic functions, basic string manipulation, ... Code in the language looks like this:

      set i 0
      inc i
      print i

    Very, very basic, you see.

    A more high-level language. I decided to make it structured, so it featured things like if-else, while, functions, and so on - the stuff most programming languages have. It ended up like an unworthy Python clone; I hated that.

    A code-golf language, which ended up similar to J, golfcode, APL, etc. Nothing special.

    As you can see, I don't lack the skills but the ideas. I can't figure out anything new, not even bad, unnecessary things, for my languages. Do you know of some weird things I could implement in my languages which don't try to make programming harder (like most esoteric languages) but funnier or more different from other languages? It can't be possible that every weird thing has been tried out so far, can it?

    Read the article

  • Code for decoding/encoding a modified base64 URL

    - by Kirk Liemohn
    I want to base64 encode data to put it in a URL and then decode it within my HttpHandler. I have found that Base64 encoding allows for a '/' character, which will mess up my UriTemplate matching. Then I found that there is a concept of a "modified Base64 for URL" from Wikipedia:

      A modified Base64 for URL variant exists, where no padding '=' will be used, and the '+' and '/' characters of standard Base64 are respectively replaced by '-' and '_', so that using URL encoders/decoders is no longer necessary and has no impact on the length of the encoded value, leaving the same encoded form intact for use in relational databases, web forms, and object identifiers in general.

    Using .NET I want to modify my current code from doing basic base64 encoding and decoding to using the "modified base64 for URL" method. Has anyone done this? To decode, I know it starts out with something like:

      string base64EncodedText = base64UrlEncodedText.Replace('-', '+').Replace('_', '/');
      // Append '=' char(s) if necessary - how best to do this?
      // My normal base64 decoding now uses encodedText

    But, I need to potentially add one or two '=' chars to the end, which looks a little more complex. My encoding logic should be a little simpler:

      // Perform normal base64 encoding
      byte[] encodedBytes = Encoding.UTF8.GetBytes(unencodedText);
      string base64EncodedText = Convert.ToBase64String(encodedBytes);

      // Apply URL variant
      string base64UrlEncodedText = base64EncodedText.Replace("=", String.Empty).Replace('+', '-').Replace('/', '_');

    I have seen the Guid to Base64 for URL StackOverflow entry, but that has a known length and therefore they can hardcode the number of equal signs needed at the end.
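    For the decode side, a small sketch of one way to restore the stripped padding before calling Convert.FromBase64String - valid base64 always has a length that is a multiple of 4, so at most two '=' characters ever need to be appended:

      using System;

      static class Base64Url
      {
          public static byte[] Decode(string base64UrlEncodedText)
          {
              string s = base64UrlEncodedText.Replace('-', '+').Replace('_', '/');

              // Restore padding: a remainder of 2 needs "==", a remainder of 3 needs "=".
              // (A remainder of 1 is never produced by valid base64 data.)
              switch (s.Length % 4)
              {
                  case 2: s += "=="; break;
                  case 3: s += "=";  break;
              }

              return Convert.FromBase64String(s);
          }
      }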

    Read the article

  • Is there any danger in calling free() or delete instead of delete[]? [closed]

    - by Matt Joiner
    Possible Duplicate: (POD) freeing memory: is delete[] equal to delete?

    Does delete deallocate the elements beyond the first in an array?

      char *s = new char[n];
      delete s;

    Does it matter in the above case, seeing as all the elements of s are allocated contiguously and it shouldn't be possible to delete only a portion of the array?

    For more complex types, would delete call the destructor of objects beyond the first one?

      Object *p = new Object[n];
      delete p;

    How can delete[] deduce the number of Objects beyond the first? Wouldn't this mean it must know the size of the allocated memory region? What if the memory region was allocated with some overhang for performance reasons? For example, one could assume that not all allocators would provide a granularity of a single byte. Then any particular allocation could exceed the required size for each element by a whole element or more.

    For primitive types, such as char and int, is there any difference between:

      int *p = new int[n];
      delete p;
      delete[] p;
      free(p);

    except for the routes taken by the respective calls through the delete/free deallocation machinery?

    Read the article

  • Add KO "data-bind" attribute on $(document).ready

    - by M.Babcock
    Preface

    I've rarely ever been a JS developer and this is my first attempt at doing something with Knockout.js. The question to follow likely illustrates both points.

    Background

    I have a fairly complex MVC3 application that I'm trying to get to work with KO (v2.0.0.0). My MVC app is designed to generically control which fields appear in the view (and how they are added to the view). It makes use of partial views to decide what to draw in the view based on the user's permissions (if the user is in group A then show control A, if the user is in group B then show control B, or possibly if the user is in group A don't include the control at all). Also, my model is very flat, so I'm not sure the built-in ability to apply my ViewModel to a specific portion of the view will help.

    My solution to this problem is to provide an action in my controller that responds with an object in JSON format that contains the jQuery selector and the content to assign to the "data-bind" attribute, and then bind the ViewModel to the View in the $(document).ready event using the values provided.

    Failed proof-of-concept

    My first attempt at proving that this works doesn't actually seem to work, and by "doesn't work" I mean it just doesn't bind the values at all (as can be seen in this jsfiddle). I've tried it with the applyBindings inside of the ready event and not, but it doesn't seem to make any bit of difference.

    Question

    What am I doing wrong? Or is this just not something that can work with KO (though I've seen at least one example online doing the same thing and it supposedly works)? Like I said in the preface, I've only ever pretended to be a JS developer (though I've generally gotten it to work in the past), so I'm at a loss where to start trying to figure out what I'm doing wrong. Hopefully this isn't a real noob question.
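    A sketch of the binding step being described, in plain JavaScript; the action URL and the shape of the JSON response (selector/binding pairs) are assumptions for illustration. The main point is that every data-bind attribute must be written to the DOM before ko.applyBindings runs:

      var viewModel = { firstName: ko.observable("") };   // the page's view model

      $(document).ready(function () {
          // Hypothetical controller action returning e.g.
          // [{ "selector": "#firstName", "binding": "value: firstName" }, ...]
          $.getJSON("/Binding/GetBindings", function (bindings) {
              $.each(bindings, function (i, b) {
                  $(b.selector).attr("data-bind", b.binding);
              });

              // Bind only after every data-bind attribute has been written to the DOM.
              ko.applyBindings(viewModel);
          });
      });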

    Read the article

  • Proper use of HTTP status codes in a "validation" server

    - by Romulo A. Ceccon
    Among the data my application sends to a third-party SOA server are complex XMLs. The server owner does provide the XML schemas (.xsd) and, since the server rejects invalid XMLs with a meaningless message, I need to validate them locally before sending.

    I could use a stand-alone XML schema validator, but they are slow, mainly because of the time required to parse the schema files. So I wrote my own schema validator (in Java, if that matters) in the form of an HTTP server which caches the already-parsed schemas.

    The problem is: many things can go wrong in the course of the validation process. Other than unexpected exceptions and successful validation:

    the server may not find the schema file specified
    the file specified may not be a valid schema file
    the XML is invalid against the schema file

    Since it's an HTTP server, I'd like to provide the client with meaningful status codes. Should the server answer with a 400 error (Bad Request) for all the above cases? Or do they have nothing to do with HTTP, and should it answer 200 with a message in the body? Any other suggestion?

    Update: the main application is written in Ruby, which doesn't have a good XML schema validation library, so a separate validation server is not over-engineering.

    Read the article
