Search Results

Search found 836 results on 34 pages for 'flexibility'.

Page 2/34 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Using IP Restrictions with URL Rewrite-Week 25

    - by OWScott
URL Rewrite offers tremendous flexibility for customizing rules to your environment. One piece of functionality that is often desired for URL Rewrite is allowing a large list of approved or denied IP addresses and subnet ranges. IIS’s original IP Restrictions feature is helpful for fully blocking an IP address, but it doesn’t offer the flexibility that URL Rewrite does. An example where URL Rewrite is helpful is when you want to allow only authorized IPs to access staging.yoursite.com, but staging.yoursite.com is part of the same site as www.yoursite.com. This requires conditional logic based on the user’s IP. This lesson covers this unique situation while also introducing Rewrite Maps, server variables, and pairing rules to add more flexibility. This is week 25 of a 52-week series for the Web Pro. Past and future videos can be found here: http://dotnetslackers.com/projects/LearnIIS7/ You can find this week’s video here.

    Read the article

  • Designing an API on top of Java RMI and REST APIs

    - by user1303881
    I'm working on the backend of a Java web application. We have a document repository (Fedora Commons, specifically) where we house XML files. I want to abstract the repository's API internally so that we aren't tightly coupled to one product. I'd also like to have the flexibility of connecting to a repository via Java RMI or REST APIs. I was hoping to get advice or resources on how to implement something like this. My thought is that I'd have some abstract repository class with methods like getRecord, updateRecord, and deleteRecord. In the constructor I would pass the URI for the repository and the API method and port. This would give some flexibility in the future if the REST API became more practical, while still allowing the flexibility of using RMI, which could (should?) have better performance. Am I overthinking this, or am I on the right path?
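
    A minimal sketch of the abstraction being described, written in Python for brevity (the same shape maps directly onto a Java interface with RMI and REST implementations). All names here, such as RecordRepository and the /records/ path, are hypothetical illustrations, not Fedora Commons' actual API:

        from abc import ABC, abstractmethod
        from urllib.request import urlopen

        class RecordRepository(ABC):
            """Callers program against this contract, never against one product."""

            @abstractmethod
            def get_record(self, record_id: str) -> bytes: ...

            @abstractmethod
            def update_record(self, record_id: str, xml: bytes) -> None: ...

            @abstractmethod
            def delete_record(self, record_id: str) -> None: ...

        class RestRepository(RecordRepository):
            """One concrete transport; an RmiRepository twin would implement the
            same three methods over RMI without callers noticing the difference."""

            def __init__(self, base_uri: str):
                self.base_uri = base_uri.rstrip("/")

            def get_record(self, record_id: str) -> bytes:
                # "/records/" is a made-up path segment, not Fedora's real REST layout
                with urlopen(f"{self.base_uri}/records/{record_id}") as resp:
                    return resp.read()

            def update_record(self, record_id: str, xml: bytes) -> None:
                raise NotImplementedError("PUT omitted from this sketch")

            def delete_record(self, record_id: str) -> None:
                raise NotImplementedError("DELETE omitted from this sketch")

        # Swapping transports is then a one-line change at the construction site:
        repo = RestRepository("http://repository.example.org")

    Passing the transport choice to a small factory, rather than burying URI, method, and port in a single constructor, keeps each implementation's configuration concerns separate.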

    Read the article

  • How To Build An Enterprise Application - Introduction

    - by Tuan Nguyen
    An enterprise application is software that fulfills four core quality attributes: reliability, flexibility, reusability, and maintainability. Reliability is the ability of a system or component to perform its required functions under stated conditions for a specific period of time. Because testing is the only way to make sure a system is reliable, we can exchange the term reliability for the term testability. Flexibility is the ability to change a system's core features without violating unrelated features or components. Although flexibility helps us achieve interoperability easily, the opposite is not true. For example, a program might run on multiple platforms and contain logic for many scenarios, but that wouldn't make it flexible if it forced us to rewrite code in all components whenever we wanted to change some aspect of a single feature. Reusability is the ability to share one or more of a system's components with another system. We should expose a component for reuse only in the context in which it is used. For example, we write classes that implement UI logic and deliver them only to classes that implement UI. Maintainability is the ability to add or remove features after the system has been released. Maintainability consists of many factors such as readability, analyzability, and extensibility, of which extensibility is critical. Maintainability requires us to write code that is longer and more complex than normal, but that doesn't mean we should introduce unnecessarily complex code; we always try to make our code clear and transparent to everyone. An enterprise application is built on an enterprise design, which consists of two parts: low-level design and high-level design. Low-level design focuses on building loosely coupled classes and components. In particular, it recommends that each class or component undertake only a single responsibility (design based on unit tests), that classes and components implement and work through interfaces (design based on contract), and that dependencies between classes and components be injectable at run time (design based on dependency). High-level design focuses on architecting the system into tiers and layers. In particular, it recommends dividing the system into subsystems for deployment, where each subsystem is called a tier (typically, an enterprise application has three tiers), and arranging classes and components into logical containers called layers (typically, an enterprise application has five layers).
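
    A minimal sketch of those low-level recommendations (hypothetical names, Python for brevity): one class per responsibility, collaboration through an interface, and the dependency injected at construction time.

        from abc import ABC, abstractmethod

        class OrderStore(ABC):
            """Design based on contract: components work through an interface."""
            @abstractmethod
            def save(self, order: dict) -> None: ...

        class SqlOrderStore(OrderStore):
            def save(self, order: dict) -> None:
                print(f"persisting order {order['id']}")   # stand-in for real SQL

        class InMemoryOrderStore(OrderStore):
            """A test double: the shared contract is what makes unit testing easy."""
            def __init__(self) -> None:
                self.saved: list = []
            def save(self, order: dict) -> None:
                self.saved.append(order)

        class OrderService:
            """Single responsibility: validation rules only; storage is injected."""
            def __init__(self, store: OrderStore) -> None:   # design based on dependency
                self._store = store

            def place(self, order: dict) -> None:
                if not order.get("items"):
                    raise ValueError("an order needs at least one item")
                self._store.save(order)

        # Production wires one implementation, a unit test wires another:
        OrderService(SqlOrderStore()).place({"id": 1, "items": ["book"]})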

    Read the article

  • How to merge two different Makefiles?

    - by martijnn2008
    I have done some reading on "merging Makefiles", and one suggestion was to leave the two Makefiles separate in different folders [1]. To me this looks counterintuitive, because I have the following situation: I have 3 source files (main.cpp, flexibility.cpp, constraints.cpp), and one of them (flexibility.cpp) makes use of the COIN-OR Linear Programming library (Clp). Installing this library on my computer generates sample Makefiles; I have adjusted one of those Makefiles, and it currently produces a good working binary.

        # Copyright (C) 2006 International Business Machines and others.
        # All Rights Reserved.
        # This file is distributed under the Eclipse Public License.
        # $Id: Makefile.in 726 2006-04-17 04:16:00Z andreasw $

        ##########################################################################
        # You can modify this example makefile to fit for your own program.     #
        # Usually, you only need to change the five CHANGEME entries below.     #
        ##########################################################################

        # To compile other examples, either change the following line, or
        # add the argument DRIVER=problem_name to make
        DRIVER = main

        # CHANGEME: This should be the name of your executable
        EXE = clp

        # CHANGEME: Here is the name of all object files corresponding to the source
        # code that you wrote in order to define the problem statement
        OBJS = $(DRIVER).o constraints.o flexibility.o

        # CHANGEME: Additional libraries
        ADDLIBS =

        # CHANGEME: Additional flags for compilation (e.g., include flags)
        ADDINCFLAGS =

        # CHANGEME: Directory to the sources for the (example) problem definition
        # files
        SRCDIR = .

        ##########################################################################
        # Usually, you don't have to change anything below.  Note that if you   #
        # change certain compiler options, you might have to recompile the      #
        # COIN package.                                                         #
        ##########################################################################

        COIN_HAS_PKGCONFIG = TRUE
        COIN_CXX_IS_CL = #TRUE
        COIN_HAS_SAMPLE = TRUE
        COIN_HAS_NETLIB = #TRUE

        # C++ Compiler command
        CXX = g++

        # C++ Compiler options
        CXXFLAGS = -O3 -pipe -DNDEBUG -pedantic-errors -Wparentheses -Wreturn-type -Wcast-qual -Wall -Wpointer-arith -Wwrite-strings -Wconversion -Wno-unknown-pragmas -Wno-long-long -DCLP_BUILD

        # additional C++ Compiler options for linking
        CXXLINKFLAGS = -Wl,--rpath -Wl,/home/martijn/Downloads/COIN/coin-Clp/lib

        # C Compiler command
        CC = gcc

        # C Compiler options
        CFLAGS = -O3 -pipe -DNDEBUG -pedantic-errors -Wimplicit -Wparentheses -Wsequence-point -Wreturn-type -Wcast-qual -Wall -Wno-unknown-pragmas -Wno-long-long -DCLP_BUILD

        # Sample data directory
        ifeq ($(COIN_HAS_SAMPLE), TRUE)
          ifeq ($(COIN_HAS_PKGCONFIG), TRUE)
            CXXFLAGS += -DSAMPLEDIR=\"`PKG_CONFIG_PATH=/home/martijn/Downloads/COIN/coin-Clp/lib64/pkgconfig:/home/martijn/Downloads/COIN/coin-Clp/lib/pkgconfig:/home/martijn/Downloads/COIN/coin-Clp/share/pkgconfig: pkg-config --variable=datadir coindatasample`\"
            CFLAGS += -DSAMPLEDIR=\"`PKG_CONFIG_PATH=/home/martijn/Downloads/COIN/coin-Clp/lib64/pkgconfig:/home/martijn/Downloads/COIN/coin-Clp/lib/pkgconfig:/home/martijn/Downloads/COIN/coin-Clp/share/pkgconfig: pkg-config --variable=datadir coindatasample`\"
          else
            CXXFLAGS += -DSAMPLEDIR=\"\"
            CFLAGS += -DSAMPLEDIR=\"\"
          endif
        endif

        # Netlib data directory
        ifeq ($(COIN_HAS_NETLIB), TRUE)
          ifeq ($(COIN_HAS_PKGCONFIG), TRUE)
            CXXFLAGS += -DNETLIBDIR=\"`PKG_CONFIG_PATH=/home/martijn/Downloads/COIN/coin-Clp/lib64/pkgconfig:/home/martijn/Downloads/COIN/coin-Clp/lib/pkgconfig:/home/martijn/Downloads/COIN/coin-Clp/share/pkgconfig: pkg-config --variable=datadir coindatanetlib`\"
            CFLAGS += -DNETLIBDIR=\"`PKG_CONFIG_PATH=/home/martijn/Downloads/COIN/coin-Clp/lib64/pkgconfig:/home/martijn/Downloads/COIN/coin-Clp/lib/pkgconfig:/home/martijn/Downloads/COIN/coin-Clp/share/pkgconfig: pkg-config --variable=datadir coindatanetlib`\"
          else
            CXXFLAGS += -DNETLIBDIR=\"\"
            CFLAGS += -DNETLIBDIR=\"\"
          endif
        endif

        # Include directories (we use the CYGPATH_W variables to allow compilation with Windows compilers)
        ifeq ($(COIN_HAS_PKGCONFIG), TRUE)
          INCL = `PKG_CONFIG_PATH=/home/martijn/Downloads/COIN/coin-Clp/lib64/pkgconfig:/home/martijn/Downloads/COIN/coin-Clp/lib/pkgconfig:/home/martijn/Downloads/COIN/coin-Clp/share/pkgconfig: pkg-config --cflags clp`
        else
          INCL =
        endif
        INCL += $(ADDINCFLAGS)

        # Linker flags
        ifeq ($(COIN_HAS_PKGCONFIG), TRUE)
          LIBS = `PKG_CONFIG_PATH=/home/martijn/Downloads/COIN/coin-Clp/lib64/pkgconfig:/home/martijn/Downloads/COIN/coin-Clp/lib/pkgconfig:/home/martijn/Downloads/COIN/coin-Clp/share/pkgconfig: pkg-config --libs clp`
        else
          ifeq ($(COIN_CXX_IS_CL), TRUE)
            LIBS = -link -libpath:`$(CYGPATH_W) /home/martijn/Downloads/COIN/coin-Clp/lib` libClp.lib
          else
            LIBS = -L/home/martijn/Downloads/COIN/coin-Clp/lib -lClp
          endif
        endif

        # The following is necessary under cygwin, if native compilers are used
        CYGPATH_W = echo

        # Here we list all possible generated objects or executables to delete them
        CLEANFILES = clp \
                main.o \
                flexibility.o \
                constraints.o

        all: $(EXE)

        .SUFFIXES: .cpp .c .o .obj

        $(EXE): $(OBJS)
                bla=;\
                for file in $(OBJS); do bla="$$bla `$(CYGPATH_W) $$file`"; done; \
                $(CXX) $(CXXLINKFLAGS) $(CXXFLAGS) -o $@ $$bla $(LIBS) $(ADDLIBS)

        clean:
                rm -rf $(CLEANFILES)

        .cpp.o:
                $(CXX) $(CXXFLAGS) $(INCL) -c -o $@ `test -f '$<' || echo '$(SRCDIR)/'`$<

        .cpp.obj:
                $(CXX) $(CXXFLAGS) $(INCL) -c -o $@ `if test -f '$<'; then $(CYGPATH_W) '$<'; else $(CYGPATH_W) '$(SRCDIR)/$<'; fi`

        .c.o:
                $(CC) $(CFLAGS) $(INCL) -c -o $@ `test -f '$<' || echo '$(SRCDIR)/'`$<

        .c.obj:
                $(CC) $(CFLAGS) $(INCL) -c -o $@ `if test -f '$<'; then $(CYGPATH_W) '$<'; else $(CYGPATH_W) '$(SRCDIR)/$<'; fi`

    The other Makefile compiles a lot of code and makes use of bison and flex. This one was also made by someone else. I am able to alter this Makefile when I want to add some code. This Makefile also produces a binary.

        CFLAGS=-Wall
        LDLIBS=-LC:/GnuWin32/lib -lfl -lm

        LSOURCES=lex.l
        YSOURCES=grammar.ypp
        CSOURCES=debug.cpp esta_plus.cpp heap.cpp main.cpp stjn.cpp timing.cpp tmsp.cpp token.cpp chaining.cpp flexibility.cpp exceptions.cpp
        HSOURCES=$(CSOURCES:.cpp=.h) includes.h
        OBJECTS=$(LSOURCES:.l=.o) $(YSOURCES:.ypp=.tab.o) $(CSOURCES:.cpp=.o)

        all: solver

        solver: CFLAGS+=-g -O0 -DDEBUG
        solver: $(OBJECTS) main.o debug.o
                g++ $(CFLAGS) -o $@ $^ $(LDLIBS)

        solver.release: CFLAGS+=-O5
        solver.release: $(OBJECTS) main.o
                g++ $(CFLAGS) -o $@ $^ $(LDLIBS)

        %.o: %.cpp
                g++ -c $(CFLAGS) -o $@ $<

        lex.cpp: lex.l grammar.tab.cpp grammar.tab.hpp
                flex -o$@ $<

        %.tab.cpp %.tab.hpp: %.ypp
                bison --verbose -d $<

        ifneq ($(LSOURCES),)
        $(LSOURCES:.l=.cpp): $(YSOURCES:.y=.tab.h)
        endif

        -include $(OBJECTS:.o=.d)

        clean:
                rm -f $(OBJECTS) $(OBJECTS:.o=.d) $(YSOURCES:.ypp=.tab.cpp) $(YSOURCES:.ypp=.tab.hpp) $(YSOURCES:.ypp=.output) $(LSOURCES:.l=.cpp) solver solver.release 2>/dev/null

        .PHONY: all clean debug release

    Both of these Makefiles are, for me, hard to understand; I don't know exactly what they do. What I want is to merge the two of them so I get only one binary, where the code compiled by the second Makefile is the result. I want to add flexibility.cpp and constraints.cpp to the second Makefile, but when I do, I get the following problem:

        flexibility.h:4:26: fatal error: ClpSimplex.hpp: No such file or directory
         #include "ClpSimplex.hpp"

    So the compiler can't find the Clp library. I also tried to copy-paste more code from the first Makefile into the second, but it still gives me that same error. Q: Can you please help me with merging the two Makefiles, or point out a more elegant way? Q: In this case, is it indeed better to merge the two Makefiles? I also tried to use CMake, but I gave up on that one quickly, because I don't know much about flex and bison.

    Read the article

  • links for 2010-05-06

    - by Bob Rhubart
    Podcast: Collaborate 10 Wrap-Up - Conclusion #c10 More Collaborate 2010 Las Vegas highlights and hijinks from this ten-member panel, including OAUG and ODTUG board members, members of the Oracle ACE program, and OAUG President Dave Ferguson. (tags: otn oracle collaborate2010) Peter Scott: Realtime Data Warehouse Loading Rittman-Mead's Peter Scott looks at putting data into a data warehouse in real time. (tags: oracle datawarehousing businessintelligence) Live Webcast: Social BPM - Integrating Enterprise 2.0 with Business Applications - May 12, 2010 at 11:00 a.m. PT Business Process Management with integrated Enterprise 2.0 collaboration can improve business responsiveness and enhance overall enterprise productivity. Learn how to take your business to the next level with a unified solution that fosters process-based collaboration between employees, partners, and customers. (tags: oracle otn bpm enterprise2.0 webcast) Management Pack for Identity Management Viewlet A screencast produced by the Grid Control team showing the features of the Identity Management Pack for Grid Control 11g. Grid Control 11g now works with Oracle Virtual Directory 11g. (tags: oracle otn security identitymanagement) @pevansgreenwood: Having too much SOA is a bad thing (and what we might do about it) "The problem is usually too much flexibility, as flexibility creates complexity, and complexity exponentially increases the effort required to manage and deliver the software." -- Peter Evans-Greenwood (tags: soa complexity flexibility) @vampbenepe: Integration patterns for social data: the Open Social Data Bus "The main point is about defining the right integration pattern for social data: is it a 'message bus' pattern or a 'shared database' pattern?" -- William Vampbenepe (tags: oracle otn enterprise2.0 enterprisearchitecture)

    Read the article

  • Where to look for challenging jobs with a relaxed atmosphere?

    - by RBTree
    I'm a dev at one of the big-name tech companies. I like the job for many reasons: I do interesting work on a cool product I solve challenging problems and use a lot of high-level skills (quantitative, creative, writing, presenting) It pays well The problem is that I feel I need a more relaxed atmosphere (shorter hours, less performance pressure, and more flexibility), in order to free up time for other pursuits and reduce stress. The ideal would be a job that's around 30-35 hours a week, where there is flexibility to work more or less in a given week. Can anyone suggest where to look for a job like this, where I wouldn't have to sacrifice too much on the above points? (Obviously I would have to sacrifice pay.) My employer does not generally offer part-time employment. The closest thing I can think of is when I did summer internships at my university's CS department. The work was very intellectually challenging, but if I needed to go home a couple hours early or get flexibility on a due date, nobody batted an eyelash. However, I'd like to find out if there are alternatives to academia since from what I've seen the pay there is a gigantic drop from what I'm currently making. I've done freelance development before, but I do like that as an employee of a large company I have a lot of things taken care of for me (e.g. benefits and guaranteed stable employment).

    Read the article

  • Does your analytic solution tell you what questions to ask?

    - by Manan Goel
    Analytic solutions exist to answer business questions. Conventional wisdom holds that if you can answer business questions quickly and accurately, you can make better business decisions and therefore achieve better business results and outperform the competition. Most business questions are well understood (read: structured), so they are relatively easy to ask and answer. Questions like what the revenues, cost of goods sold, and margins were, and which regions and products outperformed or underperformed, are relatively well understood, and as a result most analytics solutions are well equipped to answer them. Things get really interesting when you are looking for answers but don’t know what questions to ask in the first place. That’s like an explorer looking to make new discoveries by exploration. An example of this scenario is the Centers for Disease Control (CDC) in the United States trying to find the vaccine for the latest strain of the swine flu virus. The researchers at the CDC may try hundreds of options before finally discovering the vaccine. The exploration process is inherently messy and complex, fraught with false starts, one question or hunch leading to another, and a final result that may look entirely different from what was envisioned in the beginning. Speed and flexibility are key: speed so the hundreds of possible options can be explored quickly, and flexibility because almost everything about the problem, the solutions, and the process is unknown. Come to think of it, most organizations operate in an increasingly unknown or uncertain environment. Business leaders have to make decisions based on a largely unknown view of the future. And since the value proposition of analytic solutions is to help business leaders make better business decisions, for best results consider adding information exploration and discovery capabilities to your analytic solution. Such exploratory analysis capabilities will help business leaders perform even better by empowering them to refine their hunches, ask better questions, and make better decisions. That’s your analytic system not only answering the questions but also suggesting what questions to ask in the first place. Today, most leading analytic software vendors offer exploratory analysis products as part of their analytic solution offerings. So, what characteristics should be top of mind while evaluating the various solutions? The answer is quite simply the same characteristics that are essential for exploration and analysis: speed and flexibility. Speed is required because the system inherently has to be agile enough to handle hundreds of different scenarios with large volumes of data across large user populations. Exploration happens at the speed of thought, so make sure that your system is capable of operating at the speed of thought. Flexibility is required because the exploration process is full of unknowns from start to finish: unknown questions, answers, and hunches. So make sure that the system is capable of managing and exploring all relevant data, structured or unstructured, such as databases, enterprise applications, tweets, social media updates, documents, texts, emails, etc., and provides a flexible, Google-like user interface to explore all relevant data quickly. Getting started: you can help business leaders become “Decision Masters” by augmenting your analytic solution with information discovery capabilities.
    For best results, make sure that the solution you choose is enterprise class and allows advanced, yet intuitive, exploration and analysis of complex and varied data, including structured, semi-structured, and unstructured data. You can learn more about Oracle’s exploratory analysis solutions by clicking here.

    Read the article

  • “Cloud Integration in Minutes” – True or False?

    - by Bruce Tierney
    The short answer is “yes”. Connecting on-premise and cloud applications “in minutes” is true…provided you only consider the connectivity subset of integration and have a small number of cloud integration touch points. At the recent Gartner AADI conference, 230 attendees filled the Oracle session to get a more comprehensive answer to this question. During the session, titled “Simplifying Integration – The Cloud & Mobile Pre-requisite”, Oracle’s Tim Hall described cloud connectivity and then, equally importantly, the other essential and sometimes overlooked aspects of integration required to ensure a long-term application and service integration strategy. To understand the challenges and opportunities faced by cloud integration, the session started off with a slide describing how connectivity can quickly transition from simplicity to complexity as the number of applications and service vendor instances grows: increased complexity puts increased demand on the integration platform. As companies expand from on-premise applications into a hybrid on-premise/cloud infrastructure with support for mobile, cloud, and social, there is a new sense of urgency to implement a unified and comprehensive service integration platform. Without getting this unified platform in place, companies face increased complexity and cost managing a growing patchwork of niche integration toolsets as well as the disparate standards mandated by each SaaS vendor: incomplete and overlapping offerings from a patchwork of niche vendors. Also at Gartner AADI, Oracle SOA Suite customer Geeta Pyne, Director of Middleware at BMC, presented their successful strategy for how BMC efficiently manages their cloud integration despite disparate requirements from each vendor. From one of Geeta’s slides: interfaces are dictated by SaaS vendors in a wide variety (SOAP, REST, Socket, HTTP/POX, SFTP), and the flexibility of Oracle Service Bus/SOA Suite helps to support them; every vendor has its own way to handle security (WS-Security, custom headers), and support in Oracle Service Bus helps to adhere to these disparate requirements. At BMC, the flexibility of Oracle Service Bus and Oracle SOA Suite allowed them to support the wide variation in the functional requirements mandated by their SaaS vendors. In contrast to the patchwork platform approach of escalating complexity from overlapping SaaS toolkits, Oracle’s strategy is to provide a unified platform to support disparate requirements from your SaaS vendors, on-premise apps, legacy apps, and more. Furthermore, Oracle SOA Suite includes the many aspects of comprehensive integration beyond basic connectivity, including orchestration, analytics (BAM, events…), service virtualization, and more in a single unified interface: Oracle SOA Suite, unified and comprehensive. To summarize, yes, you can achieve “cloud integration in minutes” when considering the connectivity subset of integration, but be sure to look for ways to simplify as you consider a more comprehensive view of integration beyond basic connectivity, such as service virtualization, management, event processing, and more. And finally, be sure your integration platform has the deep flexibility to handle the requirements of all your future SaaS applications…many of which are unknown to you now.

    Read the article

  • What’s new in VS.10 & TFS.10?

    - by johndoucette
    Getting my geek on… I have decided to call the products VS.10 (Visual Studio 2010), TP.10 (Test Professional 2010), and TFS.10 (Team Foundation Server 2010). Thanks Neno Loje. What's new in Visual Studio & Team Foundation Server 2010? Focusing on Visual Studio Team System (VSTS) ALM-related parts: Visual Studio Ultimate 2010 NEW: IntelliTrace® (aka the historical debugger) NEW: Architecture Tools New Project Type: Modeling Project UML Diagrams UML Use Case Diagram UML Class Diagram UML Sequence Diagram (supports reverse engineering) UML Activity Diagram UML Component Diagram Layer Diagram (with Team Build integration for layer validation) Architecture Explorer Dependency visualization DGML Web & Load Tests Visual Studio Premium 2010 NEW: Architecture Tools Read-only model viewer Development Tools Code Analysis New Rules like SQL Injection detection Rule Sets Code Profiler Multi-Tier Profiling JScript Profiling Profiling applications on virtual machines in sampling mode Code Metrics Test Tools Code Coverage NEW: Test Impact Analysis NEW: Coded UI Test Database Tools (DB schema versioning & deployment) Visual Studio Professional 2010 Debugger Mixed-Mode Debugging for 64-bit Applications Export/Import of Breakpoints and data tips Visual Studio Test Professional 2010 Microsoft Test Manager (MTM, formerly known as "Camano") Fast Forward Testing Visual Studio Team Foundation Server 2010 Work Item Tracking and Project Management New MSF templates for Agile and CMMI (v5.0) Hierarchical Work Items Custom Work Item Link Types Ready-to-use Excel agile project management workbooks for managing your backlogs (including capacity planning) Convert Work Item query to an Excel report MS Excel integration Support for Work Item hierarchies Formatting is preserved after doing a 'Refresh' MS Project integration Hierarchy and successor/predecessor info is now synchronized NEW: Test Case Management Version Control Public Workspaces Branch & Merge Visualization Tracking of Changesets & Work Items Gated Check-In Team Build Build Controllers and Agents Workflow 4-based build process NEW: Lab Management (only a pre-release is available at the moment!) Project Portal & Reporting Dashboards (on SharePoint Portal) Burndown Chart TFS Web Parts (to show data from TFS) Administration & Operations Topology enhancements Application tier network load balancing (NLB) SQL Server scale-out Improved SharePoint flexibility Report Server flexibility Zone support Kerberos support Separation of TFS and SQL administration Setup Separate install from configure Improved installation wizards Optional components Simplified account requirements Improved Reporting Services configuration Setup consolidation Upgrading from previous TFS versions Improved IIS flexibility Administration Consolidation of command line tools User rename support Project Collections Archive/restore individual project collections Move Team Project Collections Server consolidation Team Project Collection Split Team Project Collection Isolation Server request cancellation Licensing: TFS server license included in MSDN subscriptions Removed features (former features not part of Visual Studio 2010): Debug » Start With Application Verifier Object Test Bench IntelliSense for C++ / CLI Debugging support for SQL 2000

    Read the article

  • First Post

    - by Allan Ritchie
    It has been a while since I've had a blog, but I'm back into open source dev and decided to get back into things.  I had a blog a few years back when NHibernate was an infant (0.8 or something) and I was working with the Wilson ORMapper (www.ormapper.net) at the time.  Anyhow, I'm still working with NHibernate (particularly the exciting v3 alpha 1) and the Castle framework. I've also written a .NET ExtDirect stack, which I'll be writing a few articles about due to its flexibility.  I decided to write yet another communication stack because all the implementations I found on the Ext forums were lacking any sort of flexibility.  So stay tuned... I'll be presenting a bunch of the extension points.

    Read the article

  • What can I do with dynamic typing that I can not do with static typing

    - by Justin984
    I've been using Python for a few days now and I think I understand the difference between dynamic and static typing. What I don't understand is why it's useful. I keep hearing about its "flexibility", but it seems like it just moves a bunch of compile-time checks to runtime, which means more unit tests. This seems like an awfully big tradeoff to make for small advantages like readability and "flexibility". Can someone provide me with a real-world example where dynamic typing allows me to do something I can't do with static typing?
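
    One commonly cited real-world answer, sketched below as an illustration rather than a definitive verdict: shaping objects from data that only exists at run time, such as the header row of a CSV file, which no compiler ever sees.

        import csv, io

        csv_text = "name,age\nalice,30\nbob,25\n"   # imagine this arrives at run time
        rows = csv.DictReader(io.StringIO(csv_text))

        # Mint a brand-new class while the program is running; its attributes
        # are dictated by the file's header row, not by any declared type.
        Record = type("Record", (), {})

        for row in rows:
            rec = Record()
            rec.__dict__.update(row)     # fields defined by the data itself
            print(rec.name, rec.age)     # attributes that exist only at run time

    A statically typed language typically reaches for code generation, reflection, or a string-keyed map to express the same thing.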

    Read the article

  • Programmers and Database Professionals in Performance Based Companies

    - by swisscheese
    Anybody here work for a company (or know of someone that does) in the fields of programming or anything related to DBs and not have set work hours? Where you are paid for performance rather than how many hours you sit in a chair at the office? Any project/company I have been a part of has always had pretty strict primary hours, with the "great opportunity"/expectation to stay until the job is done. Is this type of flexibility really feasible in a group environment in these fields? Would pay for performance work within a company in these fields? With strict primary hours I notice a lot of inefficiencies. Some weeks or days there is only so much that can be done (for whatever the reason may be), and if your work is done it doesn't help morale to force someone to stay for 8 hrs/day or 40 hrs/week when the next week they may have to pull a 60+ hour work week. I know that a lot of flexibility can come from working independently or as a consultant, so this question really does not encompass those types of positions.

    Read the article

  • Taming Python

    Python's flexibility lets you change the rules when necessary.

    Read the article

< Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >