Search Results

Search found 30784 results on 1232 pages for 'adding software'.

Page 90/1232

  • Would Using a PHP Framework Be Beneficial in My Context?

    - by Fractal
    I've just started work at a small start-up company who mainly uses PHP to develop their front-end apps. I had no prior PHP experience before joining, and this has led to my apps becoming large pieces of spaghetti code. I essentially started by adding code to implement an initial feature, and then continued to hack in more code to implement further features – without much thought for the overall design. The apps themselves output XML to render on small mobile devices. I recently started looking into frameworks that I could use. I reckon an advantage would be that they seem to force developers to modularise their programs using good-practice design patterns. This seems great for someone in my position. The extra functions they provide, for example: interfacing with databases in such a way as to make SQL injection impossible, would be very useful too. The downside I can see is that there will be a lot of overhead for me in terms of the time taken to learn the framework itself (while still getting to grips with PHP itself). I'm also worried that it will be overkill for the scale of the apps we develop. They tend to be programs that interface with a fairly simple back-end DB, and will generate about 5 different XML screens. Probably around 1 or 2 thousand lines of code. The time it takes just to configure the frameworks may not be worth it. The final problem I can see is that developers in the company – who have to go over my code, and who do not know the PHP framework I may use – will have a much harder time understanding it. Given those pros and cons, I'm still not sure on what the best course of action will be; so any advice will be greatly appreciated.

    Read the article

  • Can't play Steel Storm, Burning Retribution

    - by Goytor
    I've bought Steel Storm, Burning Retribution in the Software Center, and every time I run it, it shows the following message: You have reached this menu due to missing or unlocable content/data You may consider adding -base dir /path/to/game to your launch commandline I've gone to the main menu in the preferences tab and changed the launcher, to no avail. I've tried running it from the console with /opt/steelstorm-episode2/steelstorm, and I got: Game is Steel-Storm using base gamedir gamedata Steel-Storm Linux 01:07:07 Jun 11 2011 - release Playing shareware version. Skeletal animation uses SSE code path DPSOFTRAST available (SSE2 instructions detected) Failed to init SDL joystick subsystem: couldn't exec quake.rc couldn't exec default.cfg execing config.cfg couldn't exec autoexec.cfg Client using an automatically assigned port Client opened a socket on address 0.0.0.0:0 Client opened a socket on address [0:0:0:0:0:0:0:0]:0 Linked against SDL version 1.2.12 Using SDL library version 1.2.14 GL_VENDOR: NVIDIA Corporation GL_RENDERER: GeForce 6150SE nForce 430/PCI/SSE2/3DNOW! GL_VERSION: 2.1.2 NVIDIA 270.41.06 vid.support.arb_multisample 1 vid.mode.samples 0 vid.support.gl20shaders 1 Video Mode: fullscreen 640x480x32x0.00hz S_Startup: initializing sound output format: 48000Hz, 16 bit, 2 channels... Wanted audio Specification: Channels : 2 Format : 0x8010 Frequency : 48000 Samples : 2048 Obtained audio specification: Channels : 2 Format : 0x8010 Frequency : 48000 Samples : 1024 Sound format: 48000Hz, 2 channels, 16 bits per sample CDAudio_Init: No CD in player. Can't get initial CD volume CD Audio Initialized If I try -base /opt/steelstorm-episode2/steelstorm, it says "command not found".

    Read the article

  • How are requirements determined in open source software projects?

    - by Aron Lindberg
    In corporate in-house software development it is common for requirements to be determined through a formal process resulting in the creation of a number of requirements documents. In open source software development, this often seems to be absent. Hence, my question is: how are requirements determined in open source software projects? By "determining requirements" I simply mean "figuring out what features etc. should be developed as part of a specific software".

    Read the article

  • What is a good one-stop-shop for understanding software licensing information?

    - by Macy Abbey
    I've learned a fair amount about the various software licensing models and what those models mean for my own software project. However, I'd like to make sure I understand as many of them as possible when making decisions on how to license my own software, and in what scenarios I can safely use software under a given licensing model. Do you have a good recommendation for a book, site, etc. that has this information in one location?

    Read the article

  • Adding figures using contextual menu - Eclipse GEF

    - by darkie15
    All, I am creating a palette-less Eclipse plugin where I am adding figures to the custom editor through the contextual menu, but I had not found a way to do it. Can anyone please guide me on how to go about adding figures to the editor dynamically through the context menu, i.e. adding actions/commands? Since Eclipse GEF plugin development has so few examples to look at, I am adding my solution so others find it useful. This code helps to render a node to the editor. Source code for the Action class that renders figures to the editor:

    public class AddNodeAction extends EditorPartAction {
        public static final String ADD_NODE = "ADDNODE";

        public AddNodeAction(IEditorPart editor) {
            super(editor);
            setText("Add a Node");
            setId(ADD_NODE); // Important to set ID
        }

        public void run() {
            <ParentModelClass> parent = (<ParentModelClass>) getEditorPart().getAdapter(<ParentModelClass>.class);
            if (parent == null)
                return;
            CommandStack command = (CommandStack) getEditorPart().getAdapter(CommandStack.class);
            if (command != null) {
                CompoundCommand totalCmd = new CompoundCommand();
                <ChildModelToRenderFigureCommand> cmd = new <ChildModelToRenderFigureCommand>(parent);
                cmd.setParent(parent);
                <ChildModelClass> newNode = new <ChildModelClass>();
                cmd.setNode(newNode);
                cmd.setLocation(getLocation()); // Any location you wish to set
                totalCmd.add(cmd);
                command.execute(totalCmd);
            }
        }

        @Override
        protected boolean calculateEnabled() {
            return true;
        }
    }

    Read the article

  • eBay Leads Mobile Commerce

    - by David Dorf
    For the first time, more smartphones were shipped than PCs. This important milestone helps reinforce that retailers need a strong mobile commerce strategy. IDC reported that for the 4th quarter of 2010, manufacturers shipped 100.9 million devices versus 92.1 million PCs. One early adopter in the retail industry is eBay, the popular online auction and shopping site. In July 2008 they released their first mobile app and have increased investments ever since. In 2002 they bought PayPal for use with the online channel, but it's becoming a force in the mobile world as well. In June 2010 they acquired RedLaser, the popular barcode scanning mobile app. Both pieces of technology enhance the mobile experience, and are available to other retailers as well. More recently, in December 2010 they acquired Critical Path Software, the developer of their eBay, StubHub, and Shopping.com mobile applications. Taking their mobile development in-house was a clear signal that mobile commerce is important to their strategy. Pop on over to eBay Inc.'s mobile commerce stats page to see just how well they are doing. You can use the animated map to see where people are using the app on any given day, and you can compare sales of the different categories. eBay's hottest category is Cars & Trucks, garnering 16.5% of the total $2B (yes, billion) in mobile sales in 2010. To understand why that category is so large, let's look at the top 10 most expensive cars sold on eBay mobile in 2010:
    $240,001 Mercedes-Benz: SLR McLaren
    $209,888 Lamborghini: Gallardo
    $208,500 Ferrari: 430
    $199,900 Lamborghini: Gallardo
    $189,000 Lamborghini: Murcielago
    $185,000 Ferrari: 430
    $175,000 Porsche: 911
    $170,000 Ferrari: 550
    $160,000 Bentley: Continental, GT
    $159,900 Lamborghini: Gallardo
    eBay claims they sell 3-4 Ferraris on their mobile app each month. Yes, mobile commerce is not limited to small items. While I would wait to get home and fire up the PC, the current generation that has grown up with mobile phones has no issue satisfying their impulses. Dave Sikora of Digby told me he's seen people buy furniture sets, mattresses, and diamonds via their mobile phones. I guess mobile commerce is rapidly becoming the norm.

    Read the article

  • "Oracle", "Sybase", "SQL Server" vs just "SQL/JDBC" in the CV

    - by bobah
    How would you define a testable measure of the expertise that, if you're honest with yourself, lets you write the words "Oracle", "Sybase", or "SQL Server" in your software developer's CV, and not just "Relational Databases, SQL, JDBC"? What should every XXX developer (XXX being a vendor name) know? The question is similar to http://stackoverflow.com/questions/2119859/questions-every-good-database-sql-developer-should-be-able-to-answer but is vendor-specific. Below is a start of the list as an example, to demonstrate what kind of answers I am hoping to get. If you are an expert in X then you know Y (X - Y below):
    Sybase/SQL Server - they are very similar, Sybase is much more expensive
    Sybase/SQL Server - for Java you can use either the native Sybase/JSQLDB driver or jTDS, which uses the TDS protocol and can connect to SQL Server as well; TDS traffic can be dumped and analyzed with the hexdump command
    Sybase/SQL Server - for C++ you can use FreeTDS to connect to either, for Perl - same
    Sybase/SQL Server - a query can return multiple result sets and return codes, all of which need to be processed, otherwise errors can happen
    Sybase/SQL Server - sp_help, sp_helptext
    Sybase/SQL Server - your tables/views/procedures are under DBName/dbo/...
    Sybase - for C++ on Linux you can use the Sybase client API to connect (at least until recently)
    SQL Server - the JDBC driver has a configurable transparent failover capability
    Oracle - for C++ on Linux one can use OTLv4, which is a very powerful yet lightweight wrapper around the Oracle client API
    Oracle (contributors: ammoQ) - compilation, PLSQL, Java Stored Procedures, '' is null, Hierarchical Query, Analytic Functions, Oracle Text

    Read the article

  • Depth interpolation for z-buffer, with scanline

    - by Twodordan
    I have to write my own software 3d rasterizer, and so far I am able to project my 3d model made of triangles into 2d space: I rotate, translate and project my points to get a 2d space representation of each triangle. Then, I take the 3 triangle points and I implement the scanline algorithm (using linear interpolation) to find all points[x][y] along the edges(left and right) of the triangles, so that I can scan the triangle horizontally, row by row, and fill it with pixels. This works. Except I have to also implement z-buffering. This means that knowing the rotated&translated z coordinates of the 3 vertices of the triangle, I must interpolate the z coordinate for all other points I find with my scanline algorithm. The concept seems clear enough, I first find Za and Zb with these calculations: var Z_Slope = (bottom_point_z - top_point_z) / (bottom_point_y - top_point_y); var Za = top_point_z + ((current_point_y - top_point_y) * Z_Slope); Then for each Zp I do the same interpolation horizontally: var Z_Slope = (right_z - left_z) / (right_x - left_x); var Zp = left_z + ((current_point_x - left_x) * Z_Slope); And of course I add to the zBuffer, if current z is closer to the viewer than the previous value at that index. (my coordinate system is x: left - right; y: top - bottom; z: your face - computer screen;) The problem is, it goes haywire. The project is here and if you select the "Z-Buffered" radio button, you'll see the results... (note that the rest of the options before "Z-Buffered" use the Painter's algorithm to correctly order the triangles. I also use the painter's algorithm -only- to draw the wireframe in "Z-Buffered" mode for debugging purposes) PS: I've read here that you must turn the z's into their reciprocals (meaning z = 1/z) before you interpolate. I tried that, and it appears that there's no change. What am I missing? (could anyone clarify, precisely where you must turn z into 1/z and where to turn it back?)
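    For reference, a minimal sketch of the usual perspective-correct fix, assuming the z values are view-space depths taken after rotation/translation but before the perspective divide: view-space z itself is not linear across the screen under a perspective projection, but its reciprocal is, so interpolate 1/z along the edges and along the scanline, and only invert it per pixel if you need the actual depth:

    \[ \frac{1}{z_a} = \frac{1}{z_{top}} + t\left(\frac{1}{z_{bottom}} - \frac{1}{z_{top}}\right), \qquad t = \frac{y - y_{top}}{y_{bottom} - y_{top}} \]
    \[ \frac{1}{z_p} = \frac{1}{z_{left}} + s\left(\frac{1}{z_{right}} - \frac{1}{z_{left}}\right), \qquad s = \frac{x - x_{left}}{x_{right} - x_{left}} \]

    For the depth test itself you never have to invert back: store 1/z in the buffer and treat a larger 1/z as closer to the viewer.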

    Read the article

  • Using TPL and PLINQ to raise performance of feed aggregator

    - by DigiMortal
    In this posting I will show you how to use Task Parallel Library (TPL) and PLINQ features to boost the performance of a simple RSS-feed aggregator. I will use only very basic .NET classes that almost every developer starts from when learning parallel programming. Of course, we will also measure how every optimization affects the performance of the feed aggregator.

    Feed aggregator

    Our feed aggregator works as follows:
    1. Load the list of blogs
    2. Download the RSS feed
    3. Parse the feed XML
    4. Add new posts to the database

    Our feed aggregator is run by a task scheduler, for example every 15 minutes. We will start our journey with a serial implementation of the feed aggregator. The second step is to use task parallelism and parallelize feed downloading and parsing. And our last step is to use data parallelism to parallelize the database operations. We will use the Stopwatch class to measure how much time it takes for the aggregator to download and insert all posts from all registered blogs. After every run we empty the posts table in the database.

    Serial aggregation

    Before doing parallel stuff, let's take a look at the serial implementation of the feed aggregator. All tasks happen one after the other.

    internal class FeedClient
    {
        private readonly INewsService _newsService;
        private const int FeedItemContentMaxLength = 255;

        public FeedClient()
        {
            ObjectFactory.Initialize(container =>
            {
                container.PullConfigurationFromAppConfig = true;
            });

            _newsService = ObjectFactory.GetInstance<INewsService>();
        }

        public void Execute()
        {
            var blogs = _newsService.ListPublishedBlogs();

            for (var index = 0; index < blogs.Count; index++)
            {
                ImportFeed(blogs[index]);
            }
        }

        private void ImportFeed(BlogDto blog)
        {
            if (blog == null)
                return;
            if (string.IsNullOrEmpty(blog.RssUrl))
                return;

            var uri = new Uri(blog.RssUrl);
            SyndicationContentFormat feedFormat;

            feedFormat = SyndicationDiscoveryUtility.SyndicationContentFormatGet(uri);

            if (feedFormat == SyndicationContentFormat.Rss)
                ImportRssFeed(blog);
            if (feedFormat == SyndicationContentFormat.Atom)
                ImportAtomFeed(blog);
        }

        private void ImportRssFeed(BlogDto blog)
        {
            var uri = new Uri(blog.RssUrl);
            var feed = RssFeed.Create(uri);

            foreach (var item in feed.Channel.Items)
            {
                SaveRssFeedItem(item, blog.Id, blog.CreatedById);
            }
        }

        private void ImportAtomFeed(BlogDto blog)
        {
            var uri = new Uri(blog.RssUrl);
            var feed = AtomFeed.Create(uri);

            foreach (var item in feed.Entries)
            {
                SaveAtomFeedEntry(item, blog.Id, blog.CreatedById);
            }
        }
    }

    The serial implementation of the feed aggregator downloads and inserts all posts in 25.46 seconds.

    Task parallelism

    Task parallelism means that separate tasks are run in parallel. You can find out more about task parallelism from the MSDN page Task Parallelism (Task Parallel Library) and the Wikipedia page Task parallelism. Although finding parts of code that can run safely in parallel without synchronization issues is not an easy task, we are lucky this time. Feed import and parsing is a perfect candidate for parallel tasks. We can safely parallelize the feed import because the importing tasks don't share any resources and therefore don't need any synchronization.
    After getting the list of blogs we iterate through the collection and start a new TPL task for each blog feed aggregation.

    internal class FeedClient
    {
        private readonly INewsService _newsService;
        private const int FeedItemContentMaxLength = 255;

        public FeedClient()
        {
            ObjectFactory.Initialize(container =>
            {
                container.PullConfigurationFromAppConfig = true;
            });

            _newsService = ObjectFactory.GetInstance<INewsService>();
        }

        public void Execute()
        {
            var blogs = _newsService.ListPublishedBlogs();

            var tasks = new Task[blogs.Count];

            for (var index = 0; index < blogs.Count; index++)
            {
                tasks[index] = new Task(ImportFeed, blogs[index]);
                tasks[index].Start();
            }

            Task.WaitAll(tasks);
        }

        private void ImportFeed(object blogObject)
        {
            if (blogObject == null)
                return;
            var blog = (BlogDto)blogObject;
            if (string.IsNullOrEmpty(blog.RssUrl))
                return;

            var uri = new Uri(blog.RssUrl);
            SyndicationContentFormat feedFormat;

            feedFormat = SyndicationDiscoveryUtility.SyndicationContentFormatGet(uri);

            if (feedFormat == SyndicationContentFormat.Rss)
                ImportRssFeed(blog);
            if (feedFormat == SyndicationContentFormat.Atom)
                ImportAtomFeed(blog);
        }

        private void ImportRssFeed(BlogDto blog)
        {
            var uri = new Uri(blog.RssUrl);
            var feed = RssFeed.Create(uri);

            foreach (var item in feed.Channel.Items)
            {
                SaveRssFeedItem(item, blog.Id, blog.CreatedById);
            }
        }

        private void ImportAtomFeed(BlogDto blog)
        {
            var uri = new Uri(blog.RssUrl);
            var feed = AtomFeed.Create(uri);

            foreach (var item in feed.Entries)
            {
                SaveAtomFeedEntry(item, blog.Id, blog.CreatedById);
            }
        }
    }

    You should notice the first signs of the power of TPL. We made only minor changes to our code to parallelize the blog feed aggregation. On my machine this modification gives some performance boost – the time is now 17.57 seconds.

    Data parallelism

    There is one more way to parallelize activities. The previous section introduced task- or operation-based parallelism; this section introduces data-based parallelism. Per the MSDN page Data Parallelism (Task Parallel Library), data parallelism refers to a scenario in which the same operation is performed concurrently on elements in a source collection or array. In our code we have independent collections we can process in parallel – the imported feed entries. As checking whether a feed entry exists and inserting it if it is missing from the database doesn't affect other entries, the imported feed entries collection is an ideal candidate for parallelization.
    internal class FeedClient
    {
        private readonly INewsService _newsService;
        private const int FeedItemContentMaxLength = 255;

        public FeedClient()
        {
            ObjectFactory.Initialize(container =>
            {
                container.PullConfigurationFromAppConfig = true;
            });

            _newsService = ObjectFactory.GetInstance<INewsService>();
        }

        public void Execute()
        {
            var blogs = _newsService.ListPublishedBlogs();

            var tasks = new Task[blogs.Count];

            for (var index = 0; index < blogs.Count; index++)
            {
                tasks[index] = new Task(ImportFeed, blogs[index]);
                tasks[index].Start();
            }

            Task.WaitAll(tasks);
        }

        private void ImportFeed(object blogObject)
        {
            if (blogObject == null)
                return;
            var blog = (BlogDto)blogObject;
            if (string.IsNullOrEmpty(blog.RssUrl))
                return;

            var uri = new Uri(blog.RssUrl);
            SyndicationContentFormat feedFormat;

            feedFormat = SyndicationDiscoveryUtility.SyndicationContentFormatGet(uri);

            if (feedFormat == SyndicationContentFormat.Rss)
                ImportRssFeed(blog);
            if (feedFormat == SyndicationContentFormat.Atom)
                ImportAtomFeed(blog);
        }

        private void ImportRssFeed(BlogDto blog)
        {
            var uri = new Uri(blog.RssUrl);
            var feed = RssFeed.Create(uri);

            feed.Channel.Items.AsParallel().ForAll(a =>
            {
                SaveRssFeedItem(a, blog.Id, blog.CreatedById);
            });
        }

        private void ImportAtomFeed(BlogDto blog)
        {
            var uri = new Uri(blog.RssUrl);
            var feed = AtomFeed.Create(uri);

            feed.Entries.AsParallel().ForAll(a =>
            {
                SaveAtomFeedEntry(a, blog.Id, blog.CreatedById);
            });
        }
    }

    We made a small change again and as a result we parallelized the checking and saving of feed items. This change was data centric, as we applied the same operation to all elements in a collection. On my machine I got better performance again. The time is now 11.22 seconds.

    Results

    Let's visualize our measurement results (numbers are given in seconds). As we can see, with task parallelism feed aggregation takes about 25% less time than in the original case. When adding data parallelism to task parallelism, our aggregation takes about 2.3 times less time than in the original case.

    More about TPL and PLINQ

    Adding parallelism to your application can be a very challenging task. You have to carefully find the parts of your code where you can safely go to parallel processing, and even then you have to measure the effects of parallel processing to find out if the parallel code performs better. If you are not careful, the troubles you will face later are worse than the ones you have seen before (imagine an error that occurs on average only once per 10000 code runs). Parallel programming is something that is hard to ignore. Effective programs are able to use multiple cores of processors. Using TPL you can also set the degree of parallelism so your application doesn't use all computing cores and leaves one or more of them free for the host system and other processes. And there are many more things in TPL that make it easier for you to start and go on with parallel programming. In the next major version all .NET languages will have built-in support for parallel programming. There will also be new language constructs that support parallel programming.
    Currently you can download Visual Studio Async to get some idea about what is coming.

    Conclusion

    Parallel programming is very challenging, but the good tools offered by Visual Studio and the .NET Framework make it way easier for us. In this posting we started with a feed aggregator that imports feed items in serial mode. With two steps we parallelized feed importing and entry inserting, gaining a 2.3x rise in performance. Although this number is specific to my test environment, it shows clearly that parallel programming may raise the performance of your application significantly.
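    As a side note on the degree-of-parallelism point above, here is a minimal, hedged sketch (not from the original posting) of how the number of cores used could be capped. The names blogs, items, blog, ImportFeed and SaveRssFeedItem simply mirror the article's example and are assumed to exist in your own code:

    // Leave one core free for the host system (sketch, assumes .NET 4 or later).
    var options = new ParallelOptions
    {
        MaxDegreeOfParallelism = Math.Max(1, Environment.ProcessorCount - 1)
    };

    // Parallel.ForEach replaces the manual Task array for the per-blog work.
    Parallel.ForEach(blogs, options, blog => ImportFeed(blog));

    // The PLINQ equivalent for per-item work (items is any IEnumerable<T> of feed entries).
    items.AsParallel()
         .WithDegreeOfParallelism(Math.Max(1, Environment.ProcessorCount - 1))
         .ForAll(item => SaveRssFeedItem(item, blog.Id, blog.CreatedById));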

    Read the article

  • First time application where to start?

    - by Nazariy
    After many years of searching and copy-pasting, I'm still looking for a simple solution that can transliterate text input on the fly from one key set to another. There are quite a few online services that provide this feature, but it is still quite annoying to go online all the time. Unfortunately there are not that many applications left which are capable of doing this, and none of them are supported to this day. I decided to make my own, and at the same time to learn something new for myself. The idea is quite simple: the application should sit in the system tray and wait until the input language is changed, for example to Russian. If the Russian language is activated, the application should start to listen for the user's keystroke combinations and replace them based on a custom dictionary, for example R = Р, SH = Ш, etc. I should be able to bind the application to any installed language (Russian, Ukrainian, Bulgarian, Belarusian etc.) and customise the dictionary for any of them. So my questions are:
    Which language should I choose for this task: C++, C#, or maybe something hardcore like Assembler, given that the application should work natively with Windows XP/Vista/7 or possibly Mac? (Cross-platform support is good, but my main target is Windows.)
    Due to the nature of the application's behaviour, how can I tell anti-virus software that it is not a "key logger" and basically not a virus?
    Where should I start and what should I be aware of?
    P.S. My current programming knowledge is quite basic: PHP and JavaScript with an object-oriented approach.
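    Purely as an illustration of the dictionary idea described above (not a full key hook), a minimal C# sketch follows; the mapping table and the TransliterationMap class name are made up for the example, and the low-level keyboard hook that would feed it keystrokes still has to be written separately:

    using System.Collections.Generic;
    using System.Linq;
    using System.Text;

    class TransliterationMap
    {
        // Longest keys are tried first so "SH" wins over "S" followed by "H".
        private static readonly Dictionary<string, string> Map = new Dictionary<string, string>
        {
            { "SH", "Ш" }, { "CH", "Ч" }, { "R", "Р" }, { "S", "С" }, { "H", "Х" }
        };

        public static string Transliterate(string latinInput)
        {
            var result = new StringBuilder();
            int i = 0;
            while (i < latinInput.Length)
            {
                // Find the longest dictionary key matching at the current position.
                var match = Map.Keys
                    .OrderByDescending(k => k.Length)
                    .FirstOrDefault(k => latinInput.Substring(i).StartsWith(k));
                if (match != null)
                {
                    result.Append(Map[match]);
                    i += match.Length;
                }
                else
                {
                    result.Append(latinInput[i]); // pass unknown characters through unchanged
                    i++;
                }
            }
            return result.ToString();
        }
    }

    // Example usage: Transliterate("SH") returns "Ш", Transliterate("RS") returns "РС".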

    Read the article

  • sudo apt-get update does not work for 12.10

    - by Brian Hawi
    Hey, I recently installed Ubuntu 12.10 but the Software Center does not work. I tried sudo apt-get update because that worked when I was using Ubuntu 11.04. These are the errors:
    hawi@hawi-HP-G62-Notebook-PC:~$ sudo apt-get update
    [sudo] password for hawi:
    Err http:ke.archive.ubuntu.com quantal InRelease
    Err http:ke.archive.ubuntu.com quantal-updates InRelease
    Err http:ke.archive.ubuntu.com quantal-backports InRelease
    Err http:ke.archive.ubuntu.com quantal Release.gpg Unable to connect to ke.archive.ubuntu.com:http:
    Err http:ke.archive.ubuntu.com quantal-updates Release.gpg Unable to connect to ke.archive.ubuntu.com:http:
    Err http:ke.archive.ubuntu.com quantal-backports Release.gpg Unable to connect to ke.archive.ubuntu.com:http:
    Err http:security.ubuntu.com quantal-security InRelease
    Err http:security.ubuntu.com quantal-security Release.gpg Unable to connect to security.ubuntu.com:http: [IP: 91.189.92.190 80]
    Err http:extras.ubuntu.com quantal InRelease
    Err http:extras.ubuntu.com quantal Release.gpg Unable to connect to extras.ubuntu.com:http:
    Reading package lists... Done
    W: Failed to fetch http:ke.archive.ubuntu.com/ubuntu/dists/quantal/InRelease
    W: Failed to fetch http:ke.archive.ubuntu.com/ubuntu/dists/quantal-updates/InRelease
    W: Failed to fetch http:ke.archive.ubuntu.com/ubuntu/dists/quantal-backports/InRelease
    W: Failed to fetch http:security.ubuntu.com/ubuntu/dists/quantal-security/InRelease
    W: Failed to fetch http:extras.ubuntu.com/ubuntu/dists/quantal/InRelease
    W: Failed to fetch http:ke.archive.ubuntu.com/ubuntu/dists/quantal/Release.gpg Unable to connect to ke.archive.ubuntu.com:http:
    W: Failed to fetch http:ke.archive.ubuntu.com/ubuntu/dists/quantal-updates/Release.gpg Unable to connect to ke.archive.ubuntu.com:http:
    W: Failed to fetch http:ke.archive.ubuntu.com/ubuntu/dists/quantal-backports/Release.gpg Unable to connect to ke.archive.ubuntu.com:http:
    W: Failed to fetch http:security.ubuntu.com/ubuntu/dists/quantal-security/Release.gpg Unable to connect to security.ubuntu.com:http: [IP: 91.189.92.190 80]
    W: Failed to fetch http:extras.ubuntu.com/ubuntu/dists/quantal/Release.gpg Unable to connect to extras.ubuntu.com:http:
    W: Some index files failed to download. They have been ignored, or old ones used instead.
    (Note: I have removed the // after http because the site does not allow me to post more than two links.) What could be the issue?

    Read the article

  • Izenda Reports 6.3 Top 10 Features

    - by gt0084e1
    Izenda 6.3 Top 10 New Features and Capabilities
    1. Izenda Maps Add-On - The Izenda Maps add-on allows rapid visualization of geographic or geo-spatial data. It is fully integrated with the rest of the Izenda report package and adds a Maps tab which allows users to add interactive maps to their reports. Contact your representative or [email protected] for limited time discounts. Izenda Maps even has rich drill-down capabilities that allow you to dive deeper with a simple hover (also requires dashboards).
    2. Streamlined Pie Charts with "Other" Slices - The advanced properties of the Pie Chart now allow you to combine the smaller slices into a single "Other" slice. This reduces the visual complexity without throwing off the scale of the chart. Compare the difference below.
    3. Combined Bar + Line Charts - The Bar chart now allows dual visualization of multiple metrics simultaneously by adding a line for secondary data. Enabled via AdHocSettings.AllowLineOnBar = true;
    4. Stacked Bar Charts - The stacked bar chart lets you see a breakdown of a measure based on categorical data. It is enabled with the following code: AdHocSettings.AllowStackedBarChart = true;
    5. Self-Joining Data Sources - The self-join feature allows parent-child relationships to be accessed from the Data Sources tab. The same table can be used as a secondary child table within the Report Designer.
    6. Report Design From Dashboard View - Dashboards now sport both view and design icons to allow quick access to both.
    7. Field Arithmetic on Dates - Differences between dates can now be used as measures with the arithmetic feature.
    8. Simplified Multi-Tenancy - Integrating with multi-tenant systems is now easier than ever. The following APIs have been added to facilitate common scenarios: AdHocSettings.CurrentUserTenantId = value; AdHocSettings.SchedulerTenantID = value; AdHocSettings.SchedulerTenantField = "AccountID";
    9. Support For SQL 2008 R2 and SQL Azure - Izenda now supports the latest version of Microsoft's database as well as the SQL Azure service.
    10. Enhanced Performance and Compatibility for Stored Procedures - Izenda now supports more stored procedures than ever and runs them faster too.

    Read the article

  • Who is code wanderer?

    - by DigiMortal
    In every area of life there are people with bad habits or misbehaviors that affect the work process. Software development is not free of this kind of people either. Today I will introduce you to the code wanderer. Who is the code wanderer? Code wandering is more of a bad habit than a serious diagnosis. Code wanderers tend to review and “fix” source code in files written by others. When a code wanderer has some free moments, he or she starts opening code files he or she has never seen before and making little fixes to them. Why is the code wanderer dangerous? These fixes seem correct and are usually the first thing one would reach for when aiming for nice code. But as the changes are made by a coder who has no idea about the code he or she “fixes”, the “fixing” usually ends up messing up working code written by others. Often these “fixes” are not found immediately because they don't introduce errors detected by compilers. So these “fixes” easily find their way to production environments, because there is also a very good chance that the “fixed” code goes through all tests without any problems. How to stop the code wanderer? The first thing is to talk with the person and explain why those changes are dangerous. It is also good to establish rules that state clearly why, when and how somebody can change code written by other people. If this does not work, it is possible to isolate this person so he or she posts his or her changes to the code repository as patches and somebody reviews those changes before applying them.

    Read the article

  • Can I re-license Academic Free License code under 2-Clause BSD / ITC?

    - by Stefano Palazzo
    I want to fork a piece of code licensed under the Academic Free License. For the project, it would be preferable to re-license it under the ISC License or the 2-Clause BSD license, which are equivalent. I understand that the AFL grants me things such as limitation of liability, but licensing consistency is much more important to the project, especially since we're talking about just 800 lines of code, a quarter of which I've modified in some way. And it's very important for me to give these changes back to the community, given the fact that this is software relevant to security - I need the public scrutiny that I'll get by creating a public fork. In short: At the top of the file I want to say this, or something like it: # Licensed under the Academic Free License, version 3 # Copyright (C) 2009 Original Author # Licensed under the ISC License # Copyright (C) 2012 Stefano Palazzo # Copyright (C) 2012 Company Am I allowed to do this? My research so far indicates that it's not clear whether the AFL is GPL-Compatible, and I can't really understand any of the stuff concerning re-licensing to other permissive licenses. As a stop gap, I would also be okay with re-licensing under the GPL, however: I can find no consensus (though I can find disagreement) on whether this is allowed at all, and I don't want to risk it, of course. Wikipedia: ISC License Wikipedia: Academic Free License

    Read the article

  • How does a segment based rendering engine work?

    - by Calmarius
    As far as I know Descent was one of the first games that featured a fully 3D environment, and it used a segment based rendering engine. Its levels are built from cubic segments (these cubes may be deformed as long as they remain convex and their sides remain roughly flat). These cubes are connected by their sides. The connected sides are traversable (doors or grids may be placed on these sides), while the unconnected sides are non-traversable walls. So the game is played inside this complex. Descent was software rendered and it had to be very fast to be playable on the 10-100MHz processors of that age. Some later levels of the game are huge and contain thousands of segments, but these levels are still rendered reasonably fast. So I think they tried to minimize the number of cubes rendered somehow. How do you choose which cubes to render for a given location? As far as I know they used a kind of portal rendering, but I couldn't find what technique was used in this particular kind of engine. I think the fact that the levels are built from convex quadrilateral hexahedrons can be exploited.
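    One commonly described way to exploit exactly that structure is recursive portal culling. Below is a minimal, hedged C# sketch of the idea, not a claim about what Descent's engine actually did; the Segment and Side types and the ProjectToScreen helper are invented for illustration: start in the cube containing the camera, draw it, and for every connected side intersect its projected screen rectangle with the current clip rectangle, recursing only while the intersection is non-empty.

    using System;
    using System.Collections.Generic;

    struct Rect
    {
        public float Left, Top, Right, Bottom;
        public bool IsEmpty => Left >= Right || Top >= Bottom;

        public Rect Intersect(Rect other) => new Rect
        {
            Left = Math.Max(Left, other.Left),
            Top = Math.Max(Top, other.Top),
            Right = Math.Min(Right, other.Right),
            Bottom = Math.Min(Bottom, other.Bottom)
        };
    }

    class Side
    {
        public Segment ConnectedSegment;   // null for a solid wall
        public Func<Rect> ProjectToScreen; // screen-space bounds of the portal polygon (assumed helper)
    }

    class Segment
    {
        public List<Side> Sides = new List<Side>();
    }

    static class PortalRenderer
    {
        // Start from the segment containing the camera, with the full screen as the clip rectangle.
        // A real engine might allow revisiting a segment through a different portal; this sketch
        // simply marks segments visited once per frame to keep the recursion bounded.
        public static void Render(Segment segment, Rect clip, HashSet<Segment> visited)
        {
            if (!visited.Add(segment))
                return;

            DrawSegment(segment, clip); // rasterize walls/objects clipped to 'clip'

            foreach (var side in segment.Sides)
            {
                if (side.ConnectedSegment == null)
                    continue; // solid wall, nothing behind it is visible through here

                // Only recurse through the part of the portal that is actually on screen.
                var portalOnScreen = side.ProjectToScreen().Intersect(clip);
                if (!portalOnScreen.IsEmpty)
                    Render(side.ConnectedSegment, portalOnScreen, visited);
            }
        }

        static void DrawSegment(Segment segment, Rect clip) { /* rasterization goes here */ }
    }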

    Read the article

  • Error installing avogadro with CMake 'lconvert: could not exec No such file or directory'

    - by Orr22
    I'm brand new to Ubuntu. I'm trying to install Avogadro. The program needs the following packages, which I could install: CMake - OpenBabel 2.3.2 - Qt4 - Git - Eigen2. Here is the recipe to install it:
    cd $HOME/src
    git clone git://github.com/cryos/avogadro.git
    mkdir -p $HOME/build/avogadro
    cd $HOME/build/avogadro
    cmake $HOME/src/avogadro
    make -j2
    sudo make install
    It was unable to compile, but when I skipped the 'git clone' step it seemed to work just fine. After several stops during the CMake compiling process (software updates, get Doxygen, get flex, get bison) I was able to compile. But when I enter the 'make -j2' command the installation stops as follows:
    Orr22@javi-87:~/build_avogadro$ make -j2
    [ 0%] Built target elementcolor
    [ 0%] Built target bsdyengine
    [ 2%] Built target spglib
    [ 3%] Built target navigatetool
    [ 4%] Built target tubegen
    [ 4%] Generating libavogadro_hu.qm
    [ 6%] Built target OpenQube
    [ 6%] Generating moc_animation.cxx
    lconvert: could not exec '/usr/lib/i386-linux-gnu/qt5/bin/lconvert': No such file or directory
    make[2]: *** [libavogadro/src/libavogadro_hu.qm] Error 1
    make[2]: *** Se espera a que terminen otras tareas....
    make[1]: *** [libavogadro/src/CMakeFiles/avogadro.dir/all] Error 2
    make: *** [all] Error 2
    Any suggestions on how to proceed? Thanks in advance, Orr22

    Read the article

  • .NET development on Macs

    - by Jeff
    I posted the “exciting” conclusion of my laptop trade-ins and issues on my personal blog. The links, in chronological order, are posted below. While those posts have all of the details about performance and software used, I wanted to comment on why I like using Macs in the first place. It started in 2006 when Apple released the first Intel-based Mac. As someone with a professional video past, I had been using Macs on and off since college (1995 graduate), so I was never terribly religious about any particular platform. I’m still not, but until recently, it was staggering how crappy PCs were. They were all plastic, disposable, commodity crap. I could never justify buying a PowerBook because I was a Microsoft stack guy. When Apple went Intel, they removed that barrier. They also didn’t screw around with selling to the low end (though the plastic MacBooks bordered on that), so even the base machines were pretty well equipped. Every Mac I’ve had, I’ve used for three years. Other than that first one, I’ve also sold each one, for quite a bit of money. Things have changed quite a bit, mostly within the last year. I’m actually relieved, because Apple needs competition at the high end. Other manufacturers are finally understanding the importance of industrial design. For me, I’ll stick with Macs for now, because I’m invested in OS X apps like Aperture and the Mac versions of Adobe products. As a Microsoft developer, it doesn’t even matter though… with Parallels, I Cmd-Tab and I’m in Windows. So after three and a half years with a wonderful 17” MBP and upgraded SSD, it was time to get something lighter and smaller (traveling light is critical with a toddler), and I eventually ended up with a 13” MacBook Air, with the i7 and 8 gig upgrades, and I love it. At home I “dock” it to a Thunderbolt Display.
    A new laptop
    .NET development on a Retina MacBook Pro with Windows 8
    Returning my MacBook Pro with Retina display
    .NET development on a MacBook Air with Windows 8

    Read the article

  • GNU/Linux interactive table content GUI editor?

    - by sdaau
    I often find myself in the need to gather data (say from the internet), into a table, for comparison reasons. I usually need the final table output in HTML or MediaWiki mostly, but often times also Latex. My biggest problem is that I often forget the correct table syntax for these markup languages, as well as what needs to be properly escaped in the inline data, for the table to render correctly. So, I often wish there was a GUI application, which provides a tabular framework - which I could stick "Always on Top" as a desktop window, and I could paste content into specific cells - before finally exporting the table as a code in the correct language. One application that partially allows this is Open/LibreOffice calc: The good thing here is that: I can drag and drop browser content into a specifically targeted table cell (here B2) "Rich" text / HTML code gets pasted For long content, the cell (column) width stays put as it originally was The bad thing is, that: when the cell height (due to content size) becomes larger than the calc window, it becomes nearly impossible to scroll calc contents up and down (at least with the mousewheel), as the view gets reset to top-right corner of the selected cell calc shows an "endless"/unlimited field of cells, so not exactly a "table" - which I find visually very confusing (and cognitively taxing) Can only export table to HTML What I would need is an application that: Allows for a limited size table, but with quick adding of rows and columns (e.g. via corresponding + buttons) Allows for quick setup of row and column height and width (as well as table size) Stays put at those sizes, regardless of size of content pasted in; if cell content overflows, cell scrollbars are shown (cell content could be possibly re-edited in a separate/new window); if table overflows over window size, window scrollbars are shown Exports table in multiple formats (I'd need both HTML and mediawiki), properly escaping cell content for each (possibility to strip HTML tags from content pasted in cells, to get plain text, is a plus) Targeting a specific cell in the table for the content paste operation is a must - it doesn't have to be drag'n'drop though, a right click over a cell with "Paste content" is enough. I'd also want the ability to click in a specific cell and type in (plain text) content immediately. So, my question is: is there an application out there that already does something like this? The reason I'm asking is that - as the screenshots show - for instance Libre/OpenOffice allows it, but only somewhat (as using it for that purpose is tedious). I know there exist some GUI editors for Linux (both for UI like guile or HTML like amaya); but I don't know them enough to pinpoint if any of them would offer this kind of functionality (and at least in my searches, that kind of functionality, if present in diverse software, seems not to be advertised). Note I'm not interested in styling an HTML table, which is why I haven't used "table designer" in the title, but "table editor" (in lack of better terms) - I'm interested in (quickly) adjusting row/column size of the table, and populating it with pasted data (which is possibly HTML) in a GUI; and finally exporting such a table as self-contained HTML (or other) code.

    Read the article

  • Difference between bug, defect and flaw

    - by Hossein
    I was reading "Software Security: Building Security In" and in the first chapter I was faced with 3 terms: bug, defect and flaw. The author gave a definition for each of them but I couldn't completely understand these. Can someone give me some examples for each term? What is a defect and what is a flaw? I think I know what a bug is: a bug is a malfunction of a part of the system which produces an undesirable result, be it crashing on a wrong input or miscalculating a series of computations. Can someone elaborate more and correct me if I am wrong on this? UPDATE: To be more precise, in the book I mentioned above they (the words) are presented in a way that makes a distinction; that's why I am asking to know more. In that book there are some examples denoting which sample belongs to which category. For example: buffer overflow is said to be a bug, while issues in method overriding (subclassing issues) are related to the flaw category. Again, race condition handling issues are considered bugs, and error-handling problems (fails open) are said to be flaws! I want more elaboration in these regards.

    Read the article

  • Upgrading 10.04LTS -> 10.10 using custom sources

    - by Boatzart
    I'm trying to upgrade to 10.10 from 10.04 LTS using a custom sources.list file that points to an unofficial mirror*. The mirror does have maverick, but I get the following output when upgrading: boatzart@somecomputer: > sudo do-release-upgrade Checking for a new ubuntu release Done Upgrade tool signature Done Upgrade tool Done downloading extracting 'maverick.tar.gz' authenticate 'maverick.tar.gz' against 'maverick.tar.gz.gpg' tar: Removing leading `/' from member names Reading cache Checking package manager Reading package lists... Done Building dependency tree Reading state information... Done Building data structures... Done Reading package lists... Done Building dependency tree Reading state information... Done Building data structures... Done Updating repository information WARNING: Failed to read mirror file No valid mirror found While scanning your repository information no mirror entry for the upgrade was found. This can happen if you run a internal mirror or if the mirror information is out of date. Do you want to rewrite your 'sources.list' file anyway? If you choose 'Yes' here it will update all 'lucid' to 'maverick' entries. If you select 'No' the upgrade will cancel. Continue [yN] y WARNING: Failed to read mirror file 96% [Working] Checking package manager Reading package lists... Done Building dependency tree Reading state information... Done Building data structures... Done Calculating the changes Calculating the changes Could not calculate the upgrade An unresolvable problem occurred while calculating the upgrade: The package 'update-manager-kde' is marked for removal but it is in the removal blacklist. This can be caused by: * Upgrading to a pre-release version of Ubuntu * Running the current pre-release version of Ubuntu * Unofficial software packages not provided by Ubuntu If none of this applies, then please report this bug against the 'update-manager' package and include the files in /var/log/dist-upgrade/ in the bug report. Restoring original system state Aborting Reading package lists... Done Building dependency tree Reading state information... Done Building data structures... Done Here is the relevant section from /var/log/dist-upgrade/main.log: 2010-11-18 14:05:52,117 DEBUG The package 'update-manager-kde' is marked for removal but it's in the removal blacklist 2010-11-18 14:05:52,136 ERROR Dist-upgrade failed: 'The package 'update-manager-kde' is marked for removal but it is in the removal blacklist.' 2010-11-18 14:05:52,136 DEBUG abort called *I'm located inside of USC, and for some crazy reason any sustained downloads to anywhere outside of the University are throttled down to 5kbps inside of my lab. Because of this I need to use the following sources.list: deb http://mirrors.usc.edu/pub/linux/distributions/ubuntu/ lucid main restricted universe multiverse deb http://mirrors.usc.edu/pub/linux/distributions/ubuntu/ lucid-updates main restricted universe multiverse deb http://mirrors.usc.edu/pub/linux/distributions/ubuntu/ lucid-backports main restricted universe multiverse deb http://mirrors.usc.edu/pub/linux/distributions/ubuntu/ lucid-security main restricted universe multiverse I've tried adding four more entries to the sources.list with s/lucid/maverick/ but that didn't help. Does anyone know how to fix this? Thanks!

    Read the article

  • Is this project Structure Valid?

    - by rafuru
    I have a dilemma: at university we learn to create modular software (in Java), but this modularity is explained using a single project with packages (a package for business, another one for DAOs, another one for the model, and a last package for the frontend). But in my work we use the following structure. I will try to explain: First we create a Java library project where the model (entity classes) is created in a package. Next we create an EJB named DAOS, and using the NetBeans wizard we store the DAO interfaces in the library project in another package; these interfaces are implemented in the DAOS bean. The next part is the business logic: we create a business EJB for each group of functions, and again using the wizard we store the interfaces in the Java library project in another package; they are then implemented in the business beans. The final part (for the backend) is a bean that I have suggested: a Facade bean which will gather every method of the business beans in a single bean, and this has an interface too that is created in our library project and implemented in the bean. The next part is to call the facade module from the web project. But I don't know how valid or viable this is; maybe I'm doing everything wrong and I don't even know it! So I want to ask your opinion about this.

    Read the article

  • backface culling error (in world space)

    - by acrilige
    I am writing a simple software renderer. In my pipeline I have a backface culling stage, but it looks like it has some error (see picture). I perform culling right after the world transformation (is that correct?). (I can't insert a picture in the post because I don't have enough points, so I just uploaded it (cube model): http://imageshack.us/photo/my-images/705/bcerror.png/)

    Vector3F view_dir(0.0f, 0.0f, 1.0f);
    std::vector<Triangle> to_remove;
    for (Triangle &t : m_triangles)
    {
        Vector4F e1 = t.v2 - t.v1;
        Vector4F e2 = t.v3 - t.v1;
        Vector3F normal(
            e1.y * e2.z - e1.z * e2.y,
            e1.z * e2.x - e1.x * e2.z,
            e1.x * e2.y - e1.y * e2.x
        );
        normal.Normalize();
        float dot = Dot(view_dir, normal);
        if (dot <= 0)
            to_remove.push_back(t);
    }
    for (Triangle& t : to_remove)
        m_triangles.erase(std::remove(m_triangles.begin(), m_triangles.end(), t), m_triangles.end());

    The camera sits at the origin and points into the screen (RH). What is the reason? For a better explanation I uploaded a picture with cube rotation screenshots: http://imageshack.us/photo/my-images/842/bcmove.png/ UPDATED: The error occurs only when the triangle has a non-zero offset from the origin. UPDATED 2: If I process backface culling in clip space (after transforming all vertices with the view and projection matrices), and just check the z coordinate of the triangle normal, it works perfectly... Can I perform culling RIGHT BEFORE the view/proj transforms? In that case it looks like culling will not depend on the projection, and that's not right?.. UPDATED 3: I found the answer and will post it in two hours - again because of the lack of reputation.
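    For what it's worth, a hedged note on the symptom described above (culling only goes wrong once the triangle is offset from the origin): testing the normal against a constant view direction is only valid for an orthographic projection. With a perspective camera at position \( \mathbf{c} \), the usual world/view-space test compares the normal against the vector from the eye to the triangle, i.e. it uses

    \[ d = \mathbf{n} \cdot (\mathbf{v}_1 - \mathbf{c}) \]

    in place of \( \mathbf{n} \cdot \mathbf{view\_dir} \), keeping the same comparison. With the camera at the origin this reduces to \( \mathbf{n} \cdot \mathbf{v}_1 \). That would also explain why doing the check after projection, where the view direction effectively is constant, appeared to work.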

    Read the article

  • What have you learned from the bugs you helped discover and fix?

    - by Ethel Evans
    I liked the core of this question, and wanted to re-ask it in a way that made it less about 'fun' and more about 'What do these past mistakes tell us about how we can write and test software better?' As an SDET, I'm always looking for anecdotes about new and interesting ways that programs can fail. I've learned a lot from these tales in the past, and would like to get that from the intelligent people in this community as well. I'd be interested in hearing what the issue was, how it was caught, whether you think there was anything that could reasonably have been done to catch it earlier or to avoid the same issue on later projects, and any other interesting lessons you took away from this bug. Please only write about bugs you personally were involved with, ideally on a project you worked on (e.g., no "10 years before I was born, this happened and it was FUNNY!" answers). Please vote up answers that are thought-provoking or could change how you develop or test in some way, so this isn't just 'social fun'. Try to avoid voting up something just because it was funny.

    Read the article

  • As my first professional position should I take it at a start-up or a better known company? [closed]

    - by Carl Carlson
    I am a couple of months removed from graduating with a CS degree, and my GPA wasn't very high. But I do have aspirations of becoming a good software developer. Nevertheless I got two job offers recently. One is with a small start-up and the other is with a military contractor. The military contractor asked for my GPA and I gave it to them. The military contracting position is in developing GIS-related applications, which I became familiar with in an internship. After receiving an offer from the military contractor, I received an offer from the start-up, after the start-up asked me how much the offer was from the military contractor. So the pay is even. The start-up would require that I be immediately thrust into it, with only two other people in the start-up currently, and I would have to learn everything on my own. The military contractor has teams and people who know what they're doing and would be able to offer me guidance. Seeing as I am a couple of months removed from school and need something of a refresher, is it better that I just dive into the start-up and diversify what I've learned, or be specialized on a particular track? Some more facts about the start-up: It deals with military contracts as well and is in Phase 2 of contracts. It will require that I learn a diverse set of technologies including cyber security, Android development, Python, JavaScript, etc. The military contractor will have me learn more C#, refine my Java, do JavaScript, and GIS-related technologies. I might as well come out and say the military contractor is Northrop Grumman, and they more or less offered me less money than the projected starting salary from online salary calculators. But there is the possibility of bonuses, while the start-up doesn't include the possibility of bonuses. I think benefits for both are relatively the same.

    Read the article
