Search Results

Search found 12506 results on 501 pages for 'business transaction mana'.

Page 58/501 | < Previous Page | 54 55 56 57 58 59 60 61 62 63 64 65  | Next Page >

  • svn import, don't modify revision OR modify the list of files in a transaction

    - by Vaughan Durno
    Hi, I've gained so much knowledge/insight from this site in the past few years; now I'm actually hoping to get some enlightenment. The scenario is as follows: you have the general structure of the repo (trunk, branches, tags), but added to the layout you have another directory called 'db_revs'. Now in the pre-commit hook, you take a dump of a specific database (the specifics are irrelevant) into a temporary file, say /tmp/REV.sql (REV being the HEAD revision number of the repo, or the transaction). OK, all is well, and you can just import that temp file into the repo at /db_revs/REV.sql.

    Now obviously that import, even though it happens during a commit, increments the revision of the repo. So when you do a commit at some point to, say, 'test.php' in the trunk and it completes at, say, revision 159, the pre-commit hook runs as it should and the DB dump gets imported, but then you are sitting with a tree in the repo-browser where 'trunk' is at revision 159, and 'db_revs', which has the imported dump, is at 158 (I've made it so that the filename matches the revision, i.e. 159.sql, but that file is then at revision 158).

    NB: if you're doing an import in a pre-commit hook, you need to add some logic to not perform the import, say by checking first for the existence of the temp file, otherwise it will cause, um, a stack overflow and your PC will quickly crawl to a standstill.

    So I wanted to know if it is possible to make an import not commit its changes. I realise I might be barking up the wrong tree to begin with, so I have another idea of doing this, which brings me to the second part of my question: would it be possible to modify the list of files that the transaction is about to commit to the repo? I know this can be done to a WC, but that won't help, as a WC is a checked-out copy of, say, the trunk, so I'm not sure how you would add a file to the 'db_revs' folder, which is above trunk? Any help is greatly appreciated. Cheers, Vaughan

    Read the article

  • Spring - Transaction Readonly

    - by AAK
    Hello Gurus! Just wanted your expert opinions on declarative transaction management for Spring. Here is my setup:

    A. DAO layer is plain old JDBC using jdbcTemplate (no Hibernate etc.).
    B. Service layer is POJO with declarative transactions as follows: save*, readOnly=false, rollback for Throwable.

    Things work fine with the above setup. However, when I say get*, readOnly=true, I see errors in my log file saying "Database connection cannot be marked as readonly". This happens for all get* methods in the service layer. Now my questions:

    A. Do I have to mark get* as read-only? All my get* methods are pure read DB operations. I do not wish to run them in any transaction context. How serious is the above error?
    B. When I remove the get* configuration, I do not see the errors; moreover, all my simple get* operations are performed without transactions. Is this the way to go?
    C. Why would anyone want to have transactional methods where readOnly = true? Is there any practical significance of this configuration?

    Thank you! As always, your responses are much appreciated!
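
    A minimal annotation-style sketch of the save*/get* split described above (the class, method and table names are assumptions, not the poster's XML configuration). readOnly = true is only a hint: Spring may mark the JDBC connection read-only and an ORM may skip dirty checking, and some drivers reject that hint, which is one plausible source of the warning quoted above; for plain JdbcTemplate reads it is also reasonable to drop the transaction entirely.

        import java.util.List;
        import org.springframework.jdbc.core.JdbcTemplate;
        import org.springframework.stereotype.Service;
        import org.springframework.transaction.annotation.Transactional;

        @Service
        public class AccountService {

            private final JdbcTemplate jdbcTemplate;

            public AccountService(JdbcTemplate jdbcTemplate) {
                this.jdbcTemplate = jdbcTemplate;
            }

            // save* methods: read-write transaction, rolled back on any Throwable.
            @Transactional(rollbackFor = Throwable.class)
            public void saveAccount(String name) {
                jdbcTemplate.update("INSERT INTO account (name) VALUES (?)", name);
            }

            // get* methods: readOnly is a hint to the driver/ORM, not a guarantee.
            @Transactional(readOnly = true)
            public List<String> getAccountNames() {
                return jdbcTemplate.queryForList("SELECT name FROM account", String.class);
            }
        }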

    Read the article

  • activemerchant PayPalExpress transaction is invalid

    - by Ameya Savale
    I am trying to integrate activemerchant into my Ruby on Rails application. This is my controller action where I get the purchase attributes and create a PaypalExpressResponse object:

        def checkout
          total_as_cents, purchase_params = get_setup_params(Schedule.find(params[:schedule]), request)
          setup_response = @gateway.setup_purchase(total_as_cents, purchase_params)
          redirect_to @gateway.redirect_url_for(setup_response.token)
        end

    @gateway is my PaypalExpressGateway object, which I create using this method in my controller:

        def assign_gateway
          @gateway = PaypalExpressGateway.new(
            :login => api_user,
            :password => api_pass,
            :signature => api_signature
          )
        end

    I got the api_user, api_pass and api_signature values from my developer.paypal.com account; when I logged in for the first time there was already a sandbox user created as a merchant, which is where I got the API credentials from. And finally, here is my get_setup_params method:

        def get_setup_params(schedule, request)
          purchase_params = {
            :ip => request.remote_ip,
            :return_url => url_for(:action => 'review', :only_path => false, :sched => schedule.id),
            :cancel_return_url => register_path,
            :allow_note => true,
            :item => schedule.id
          }
          return to_cents(schedule.fee), purchase_params
        end

    However, when I click on the checkout button, I get redirected to a sandbox PayPal page saying "This transaction is invalid. Please return to the recipient's website to complete your transaction using their regular checkout flow." I'm not sure exactly what's wrong; I think the problem lies in the credentials, but I don't know why. Any help will be appreciated. One other point: I'm running this in my development environment, so I have put this in my config file:

        config.after_initialize do
          ActiveMerchant::Billing::Base.mode = :test
        end

    UPDATE: Found out what the problem was: my cancel return URL was invalid. Instead of using register_path, I used url_for(action: "action-name", :only_path => false). This answer helped me: Rails ActiveMerchant - Paypal Express Checkout Error, even though I wasn't able to see the output of the response like the person there managed to do.

    Read the article

  • Load Testing Java Web Application - find TPS / Avg transaction response time

    - by Steve
    I would like to build my own load-testing tool in Java, with the goal of being able to load test a web application I am building throughout the development cycle. The web application will be receiving server-to-server HTTP POST requests, and I would like to find its starting transactions-per-second (TPS) capacity along with the average response time. The POST request and response messages will be in XML (I don't think that's really applicable, though :) ). I have written a very simple Java app to send transactions and count how many transactions it was able to send in one second (1000 ms); however, I don't think this is the best way to load test. Really what I want is to send any number of transactions at exactly the same time, i.e. 10, 50, 100, etc. Any help would be appreciated! Oh, and here is my current test app code:

        Thread[] t = new Thread[1];
        for (int a = 0; a < t.length; a++) {
            t[a] = new Thread(new MessageLoop());
        }
        startTime = System.currentTimeMillis();
        System.out.println(startTime);
        for (int a = 0; a < t.length; a++) {
            t[a].start();
        }
        while ((System.currentTimeMillis() - startTime) < 1000 ) {
        }
        if ((System.currentTimeMillis() - startTime) > 1000 ) {
            for (int a = 0; a < t.length; a++) {
                t[a].interrupt();
            }
        }
        long endTime = System.currentTimeMillis();
        System.out.println(endTime);
        System.out.println("Total time: " + (endTime - startTime));
        System.out.println("Total transactions: " + count);

        private static class MessageLoop implements Runnable {
            public void run() {
                try {
                    // Test number of transactions
                    while ((System.currentTimeMillis() - startTime) < 1000 ) {
                        // SEND TRANSACTION HERE
                        count++;
                    }
                } catch (Exception e) {
                }
            }
        }
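
    One common way to release a fixed number of requests at exactly the same instant is a CountDownLatch "start gate". Below is a minimal, self-contained sketch of that idea; sendTransaction() is a hypothetical placeholder for the XML-over-HTTP POST and is not part of the original code. It also derives an approximate TPS figure and average response time from the measured wall-clock and per-request times.

        import java.util.concurrent.*;
        import java.util.concurrent.atomic.AtomicLong;

        public class BurstLoadTest {
            public static void main(String[] args) throws Exception {
                int concurrency = 50;                          // e.g. 10, 50, 100 ...
                CountDownLatch startGate = new CountDownLatch(1);
                CountDownLatch endGate = new CountDownLatch(concurrency);
                AtomicLong totalNanos = new AtomicLong();

                ExecutorService pool = Executors.newFixedThreadPool(concurrency);
                for (int i = 0; i < concurrency; i++) {
                    pool.submit(() -> {
                        try {
                            startGate.await();                 // every worker blocks here
                            long t0 = System.nanoTime();
                            sendTransaction();                 // hypothetical HTTP POST
                            totalNanos.addAndGet(System.nanoTime() - t0);
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                        } finally {
                            endGate.countDown();
                        }
                    });
                }

                long wallStart = System.nanoTime();
                startGate.countDown();                         // release all workers at once
                endGate.await();
                long wallNanos = System.nanoTime() - wallStart;
                pool.shutdown();

                System.out.printf("TPS ~= %.1f, avg response = %.1f ms%n",
                        concurrency / (wallNanos / 1e9),
                        totalNanos.get() / 1e6 / concurrency);
            }

            private static void sendTransaction() {
                // placeholder for the server-to-server XML POST described above
            }
        }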

    Read the article

  • Strange behaviour of Spring transaction support for JPA + Hibernate + @Transactional annotation

    - by abovesun
    I found really strange behavior in a relatively simple use case; probably I can't understand it because my knowledge of Spring's @Transactional nature is not deep enough, but it is quite interesting. I have a simple User DAO that extends Spring's JpaDaoSupport class and contains a standard save method:

        @Transactional
        public User save(User user) {
            getJpaTemplate().persist(user);
            return user;
        }

    It was working fine until I added a new method to the same class: User getSuperUser(). This method should return the user with isAdmin == true, and if there is no super user in the DB, the method should create one. That's how it looked:

        public User createSuperUser() {
            User admin = null;
            try {
                admin = (User) getJpaTemplate().execute(new JpaCallback() {
                    public Object doInJpa(EntityManager em) throws PersistenceException {
                        return em.createQuery("select u from UserImpl u where u.admin = true").getSingleResult();
                    }
                });
            } catch (EmptyResultDataAccessException ex) {
                admin = new User("login", "password");
                admin.setAdmin(true);
                save(admin); // THIS IS THE POINT WHERE THE STRANGE THING COMES OUT
            }
            return admin;
        }

    As you can see, the code is straightforward, and I was very confused when I found out that no transaction was created and committed on invocation of the save(admin) method, and no new user was actually created, despite the @Transactional annotation. As a result we have this situation: when save() is invoked from outside the UserDAO class, the @Transactional annotation is honoured and the user is successfully created, but if save() is invoked from inside another method of the same DAO class, the @Transactional annotation is ignored. Here is how I changed the save() method to force it to always create a transaction:

        public User save(final User user) {
            getJpaTemplate().execute(new JpaCallback() {
                public Object doInJpa(EntityManager em) throws PersistenceException {
                    em.getTransaction().begin();
                    em.persist(user);
                    em.getTransaction().commit();
                    return null;
                }
            });
            return user;
        }

    As you can see, I manually invoke begin and commit. Any ideas?
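
    The usual explanation for this symptom is that the @Transactional advice lives on the Spring proxy wrapping the bean, so a call from createSuperUser() to save() on 'this' never passes through the proxy. A minimal sketch of one common workaround is below; the names are assumptions reusing the poster's User class, self-injection of the proxy only resolves cleanly on reasonably recent Spring versions, and moving save() into a separate bean achieves the same effect on older ones.

        import org.springframework.beans.factory.annotation.Autowired;
        import org.springframework.stereotype.Repository;
        import org.springframework.transaction.annotation.Transactional;

        @Repository
        public class UserDao {

            @Autowired
            private UserDao self; // injected as the transactional proxy, not the raw 'this'

            @Transactional
            public User save(User user) {
                // getJpaTemplate().persist(user) exactly as in the original method
                return user;
            }

            public User createSuperUser() {
                User admin = new User("login", "password");
                admin.setAdmin(true);
                return self.save(admin); // routed through the proxy, so the transaction starts
            }
        }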

    Read the article

  • How to use a Transaction in Entity Framework?

    - by programmerist
    How do I use a transaction in Entity Framework? I read some links on Stack Overflow: http://stackoverflow.com/questions/815586/entity-framework-using-transactions-or-savechangesfalse-and-acceptallchanges

    BUT I have 3 tables, so I have 3 entities:

        CREATE TABLE Personel (
            PersonelID integer PRIMARY KEY identity not null,
            Ad varchar(30),
            Soyad varchar(30),
            Meslek varchar(100),
            DogumTarihi datetime,
            DogumYeri nvarchar(100),
            PirimToplami float);
        Go
        CREATE TABLE Prim (
            PrimID integer PRIMARY KEY identity not null,
            PersonelID integer Foreign KEY references Personel(PersonelID),
            SatisTutari int,
            Prim float,
            SatisTarihi Datetime);
        Go
        CREATE TABLE Finans (
            ID integer PRIMARY KEY identity not null,
            Tutar float);

    Personel, Prim and Finans are my tables. If you look at the Prim table you can see that Prim is a float column; if a value that is not a valid float comes in from a textbox, my transaction must kick in.

        using (TestEntities testCtx = new TestEntities())
        {
            using (TransactionScope scope = new TransactionScope())
            {
                // do something...
                testCtx.Personel.SaveChanges();
                // do something...
                testCtx.Prim.SaveChanges();
                // do something...
                testCtx.Finans.SaveChanges();
                scope.Complete();
                success = true;
            }
        }

    How can I do that?

    Read the article

  • SQL SERVER – Guest Post – Architecting Data Warehouse – Niraj Bhatt

    - by pinaldave
    Niraj Bhatt works as an Enterprise Architect for a Fortune 500 company and has an innate passion for building / studying software systems. He is a top rated speaker at various technical forums including Tech·Ed, MCT Summit, Developer Summit, and Virtual Tech Days, among others. Having run a successful startup for four years, Niraj enjoys working on IT innovations that can impact an enterprise bottom line, streamlining IT budgets through IT consolidation, architecture and integration of systems, performance tuning, and review of enterprise applications. He has received the Microsoft MVP award for ASP.NET, Connected Systems and most recently Windows Azure. When he is away from his laptop, you will find him taking deep dives in automobiles, pottery, rafting, photography, cooking and financial statements, though not necessarily in that order. He is also a manager/speaker at BDOTNET, Asia's largest .NET user group. Here is the guest post by Niraj Bhatt.

    As data in your applications grows, it's the database that usually becomes a bottleneck. It's hard to scale a relational DB, and the preferred approach for large-scale applications is to create separate databases for writes and reads. These databases are referred to as the transactional database and the reporting database. Though there are tools / techniques which can allow you to create a snapshot of your transactional database for reporting purposes, sometimes they don't quite fit the reporting requirements of an enterprise. These requirements typically are data analytics, an effective schema (for an information worker to self-service herself), historical data, better performance (flat data, no joins), etc. This is where the need for a data warehouse or an OLAP system arises. A key point to remember is that a data warehouse is mostly a relational database. It's built on top of the same concepts: Tables, Rows, Columns, Primary keys, Foreign Keys, etc.

    Before we talk about how data warehouses are typically structured, let's understand the key components that can create a data flow between OLTP systems and OLAP systems. There are 3 major areas to it:

    a) The OLTP system should be capable of tracking its changes, as all these changes should go back to the data warehouse for historical recording. For example, if an OLTP transaction moves a customer from the silver to the gold category, the OLTP system needs to ensure that this change is tracked and sent to the data warehouse for reporting purposes. A report in context could be how many customers, divided by geographies, moved from the silver to the gold category. In data warehouse terminology this process is called Change Data Capture. There are quite a few systems that leverage database triggers to move these changes to corresponding tracking tables. There are also out-of-the-box features provided by some databases; e.g. SQL Server 2008 offers Change Data Capture and Change Tracking for addressing such requirements.

    b) After we make the OLTP system capable of tracking its changes, we need to provision a batch process that runs periodically, takes these changes from the OLTP system and dumps them into the data warehouse. There are many tools out there that can help you fill this gap – SQL Server Integration Services happens to be one of them.

    c) So we have an OLTP system that knows how to track its changes, and we have jobs that run periodically to move these changes to the warehouse. The question that remains is how the warehouse will record these changes. This structural change in the data warehouse arena is often covered under something called Slowly Changing Dimension (SCD). While we will talk about dimensions in a while, SCD can be applied to pure relational tables too. SCD enables a database structure to capture historical data. This creates multiple records for a given entity in the relational database, and data warehouses prefer having their own primary key, often known as a surrogate key.

    As I mentioned, a data warehouse is just a relational database, but industry often attributes a specific schema style to data warehouses. These styles are Star Schema or Snowflake Schema. The motivation behind these styles is to create a flat database structure (as opposed to a normalized one), which is easy to understand / use, easy to query and easy to slice / dice. Star schema is a database structure made up of dimensions and facts. Facts are generally the numbers (sales, quantity, etc.) that you want to slice and dice. Fact tables have these numbers and have references (foreign keys) to a set of tables that provide context around those facts. E.g. if you have recorded 10,000 USD as sales, that number would go in a sales fact table and could have foreign keys attached to it that refer to the sales agent responsible for the sale and to a time table which contains the dates between which that sale was made. These agent and time tables are called dimensions, which provide context to the numbers stored in fact tables. This schema structure, with the fact at the center surrounded by dimensions, is called a Star schema. A similar structure, with the difference that the dimension tables are normalized, is called a Snowflake schema.

    This relational structure of facts and dimensions serves as an input for another analysis structure called a Cube. Though physically a Cube is a special structure supported by commercial databases like SQL Server Analysis Services, logically it's a multidimensional structure where dimensions define the sides of the cube and facts define the content. Facts are often called Measures inside a cube. Dimensions often tend to form a hierarchy; e.g. Product may be broken into categories and categories in turn into individual items. Category and Items are often referred to as Levels and their constituents as Members, with their overall structure called a Hierarchy. Measures are rolled up as per the dimensional hierarchy. These rolled-up measures are called Aggregates. Now this may seem like an overwhelming vocabulary to deal with, but don't worry, it will sink in as you start working with Cubes and others.

    Let's see a few other terms that we run into while talking about data warehouses. ODS, or an Operational Data Store, is a frequently misused term. There would be a few users in your organization that want to report on the most current data and can't afford to miss a single transaction for their report. Then there is another set of users that typically don't care how current the data is. Mostly senior-level executives, who are interested in trending, mining, forecasting, strategizing, etc., don't care about that one specific transaction. This is where an ODS can come in handy. An ODS can use the same star schema and the OLAP cubes we saw earlier. The only difference is that the data inside an ODS is short-lived, i.e. for a few months, and the ODS syncs with the OLTP system every few minutes. The data warehouse can periodically sync with the ODS, either daily or weekly, depending on business drivers.

    Data marts are another frequently talked-about topic in data warehousing. They are subject-specific data warehouses. Data warehouses that try to span an enterprise are normally too big to scope, build, manage, track, etc. Hence they are often scaled down to something called a Data mart that supports a specific segment of business like sales, marketing, or support. Data marts, too, are often designed using the star schema model discussed earlier. Industry is divided when it comes to the use of data marts. Some experts prefer having data marts along with a central data warehouse. The data warehouse here acts as an information staging and distribution hub, with the spokes being data marts connected via data feeds serving summarized data. Others eliminate the need for a centralized data warehouse, citing that most users want to report on detailed data.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Best Practices, Business Intelligence, Data Warehousing, Database, Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • The Data Scientist

    - by BuckWoody
    A new term - well, perhaps not that new - has come up and I'm actually very excited about it. The term is Data Scientist, and since it's new, it's fairly undefined. I'll explain what I think it means, and why I'm excited about it.

    In general, I've found the term deals at its most basic with analyzing data. Of course, we all do that, and the term itself in that definition is redundant. There is no science that I know of that does not work with analyzing lots of data. But the term seems to refer to more than the common practices of looking at data visually, putting it in a spreadsheet or report, or even using simple coding to examine data sets. The term Data Scientist (as far as I can make out this early in its use) is someone who has a strong understanding of data sources, relevance (statistical and otherwise) and processing methods as well as front-end displays of large sets of complicated data. Some - but not all - Business Intelligence professionals have these skills. In other cases, senior developers, database architects or others fill these needs, but in my experience, many lack the strong mathematical skills needed to make these choices properly.

    I've divided the knowledge base for someone that would wear this title into three large segments. It remains to be seen if a given Data Scientist would be responsible for knowing all these areas or would specialize. There are pretty high requirements on the math side, specifically in graduate-degree level statistics, but in my experience a company will only have a few of these folks, so they are expected to know quite a bit in each of these areas.

    Persistence

    The first area is finding, cleaning and storing the data. In some cases, no cleaning is done prior to storage - it's just identified and the cleansing is done in a later step. This area is where the professional would be able to tell if a particular data set should be stored in a Relational Database Management System (RDBMS), across a set of key/value pair storage (NoSQL) or in a file system like HDFS (part of the Hadoop landscape) or other methods. Or do you examine the stream of data without storing it in another system at all? This is an important decision - it's a foundation choice that deals not only with a lot of expense of purchasing systems or even using Cloud Computing (PaaS, SaaS or IaaS) to source it, but also the skillsets and other resources needed to care for and feed the system for a long time. The Data Scientist sets something into motion that will probably outlast his or her career at a company or organization. Often these choices are made by senior developers, database administrators or architects in a company. But sometimes each of these has a certain bias towards making a decision one way or another. The Data Scientist would examine these choices in light of the data itself, starting perhaps even before the business requirements are created. The business may not even be aware of all the strategic and tactical data sources that they have access to.

    Processing

    Once the decision is made to store the data, the next set of decisions is based around how to process the data. An RDBMS scales well to a certain level, and provides a high degree of ACID compliance as well as offering a well-known set-based language to work with this data. In other cases, scale should be spread among multiple nodes (as in the case of Hadoop landscapes or NoSQL offerings) or even across a Cloud provider like Windows Azure Table Storage. In fact, in many cases - most of the ones I'm dealing with lately - the data should be split among multiple types of processing environments. This is a newer idea. Many data professionals simply pick a methodology (RDBMS with Star Schemas, NoSQL, etc.) and put all data there, regardless of its shape, processing needs and so on. A Data Scientist is familiar not only with the various processing methods, but with how they work, so that they can choose the right one for a given need. This is a huge time commitment, hence the need for a dedicated title like this one.

    Presentation

    This is where the need for a Data Scientist is most often already being filled, sometimes with more or less success. The latest Business Intelligence systems are quite good at allowing you to create amazing graphics - but it's the data behind the graphics that is the most important component of truly effective displays. This is where the mathematics requirement of the Data Scientist title is the most unforgiving. In fact, someone without a good foundation in statistics is not a good candidate for creating reports. Even a basic level of statistics can be dangerous. Anyone who works in analyzing data will tell you that there are multiple errors possible when data just seems right - and basic statistics bears out that you're on the right track - that are only solvable when you understand why the statistical formula works the way it does. And there are lots of ways of presenting data. Sometimes all you need is a "yes" or "no" answer that can only come after heavy analysis work. In that case, a simple e-mail might be all the reporting you need. In others, complex relationships and multiple components require a deep understanding of the various graphical methods of presenting data. Knowing which kind of chart, color, graphic or shape conveys a particular datum best is essential knowledge for the Data Scientist.

    Why I'm excited

    I love this area of study. I like math, stats, and computing technologies, but it goes beyond that. I love what data can do - how it can help an organization. I've been fortunate enough in my professional career these past two decades to work with lots of folks who perform this role at companies from aerospace to medical firms, from manufacturing to retail. Interestingly, the size of the company really isn't germane here. I worked with one very small bio-tech (cryogenics) company that worked deeply with analysis of complex interrelated data. So watch this space. No, I'm not leaving Azure or distributed computing or Microsoft. In fact, I think I'm perfectly situated to investigate this role further. We have a huge set of tools, from RDBMS to Hadoop, to allow me to explore. And I'm happy to share what I learn along the way.

    Read the article

  • Minimizing SQL transaction log file size on developer box running simple recovery model

    - by Anders Rask
    We have a lot of SQL Servers in our development environment where we never take backups of the databases (TFS for code is enough). The (SharePoint) databases are all set to the simple recovery model, but the log files, especially for the SharePoint configuration database, are growing quite large and filling up the data drive on the SQL Server. Since these log files are never used for anything, I would like advice on how best to minimize the size of these log files, or even disable them if possible. I'm not completely sure why the log files grow so large even with simple logging (I checked for long-running transactions (DBCC OPENTRAN) but found none). I guess the reason the log files are not being truncated is that we don't take any backups, and hence checkpoints aren't reached. The log files are set to autogrow by 10%, restricted to 2 GB, so I guess that is why the checkpoint threshold (70%) isn't reached here either. What would be the best strategy to keep the log files small (best case 0) without sacrificing performance (e.g. VLF fragmentation)?

    Read the article

  • Identifying mail account used in CRAM-MD5 transaction

    - by ManiacZX
    I suppose this is one of those cases where the tool for identifying the problem is also the tool used for taking advantage of it. I have a mail server through which I am seeing spam being sent. It is not an open relay; the messages in question are being sent by someone authenticating to SMTP with CRAM-MD5. However, the logs only capture the actual data passed, which has been hashed, so I cannot see what user account is being used. My suspicion is a simple username/password combo, or a user account's password has otherwise been compromised, but I cannot do much about it without knowing which user it is. Of course I can block the IP that is doing it, but that doesn't fix the real problem. I have both the CRAM-MD5 Base64 challenge string and the hashed client auth string containing the username, password and challenge string. I am looking for a way to either reverse this (which I haven't been able to find any information on) or, otherwise, I suppose I need a dictionary attack tool designed for CRAM-MD5 to run through two lists, one for usernames and one for passwords, with the challenge string held constant, until it finds a result matching the authentication string I have logged. Any information on reversing using the data I have logged, a tool to identify it, or any alternative methods you have used for this situation would be greatly appreciated.
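
    The CRAM-MD5 exchange (RFC 2195) can't be reversed, but a candidate username/password pair can be checked offline against the logged strings, since the client response is just base64 of "username " followed by the lowercase-hex HMAC-MD5 of the challenge keyed with the password. A minimal sketch of that check; the two Base64 strings and the guesses are placeholders standing in for the values captured in the mail log, not real data:

        import javax.crypto.Mac;
        import javax.crypto.spec.SecretKeySpec;
        import java.nio.charset.StandardCharsets;
        import java.util.Base64;

        public class CramMd5Check {
            public static void main(String[] args) throws Exception {
                String challengeB64      = "PDE4OTYuNjk3MTcwOTUyQHBvc3RvZmZpY2UuZXhhbXBsZS5uZXQ+"; // from the log
                String loggedResponseB64 = "dGltIGI5MTNhNjAyYzdlZGE3YTQ5NWI0ZTZlNzMzNGQzODkw";      // from the log
                String userGuess = "tim";               // candidate username from the wordlist
                String passGuess = "tanstaaftanstaaf";  // candidate password from the wordlist

                String challenge = new String(Base64.getDecoder().decode(challengeB64), StandardCharsets.US_ASCII);

                // HMAC-MD5 keyed with the candidate password, per RFC 2195 / RFC 2104
                Mac mac = Mac.getInstance("HmacMD5");
                mac.init(new SecretKeySpec(passGuess.getBytes(StandardCharsets.US_ASCII), "HmacMD5"));
                byte[] digest = mac.doFinal(challenge.getBytes(StandardCharsets.US_ASCII));

                StringBuilder hex = new StringBuilder();
                for (byte b : digest) hex.append(String.format("%02x", b));

                String candidate = Base64.getEncoder()
                        .encodeToString((userGuess + " " + hex).getBytes(StandardCharsets.US_ASCII));

                System.out.println(candidate.equals(loggedResponseB64) ? "match" : "no match");
            }
        }

    Looping that check over the username and password lists gives the dictionary attack described above; the challenge stays constant for a given logged session.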

    Read the article

  • Small Business Server 2008 - Microsoft Windows Search or Microsoft Search Server 2010 Express

    - by Christopher Edwards
    See also: Small (Business) Server - Microsoft Windows Search or Microsoft Search Server 2008 Express. Can anyone tell me if they have Search Server Express 2010 Beta working on Small Business Server 2010, or indeed if it is supported? The only reference I can find is here, but given how scant it is I'm not sure I should trust it: http://social.technet.microsoft.com/Forums/en/sharepoint2010setup/thread/12cf9846-b940-4441-9fc1-30016ea87e5c

    Read the article

  • SQL Server Transaction Log RAID

    - by Eric Maibach
    We have three SQL Server servers, and each server has about five or six databases on it. We are in the process of moving these servers to a new SAN, and I am working on the best RAID configuration. Currently all of the log files for all of the databases share a RAID array; there is nothing else on this RAID array except for the log files, but all of the databases use this same array for their log files. I have read that it is best to have log files on separate disks. But in our case I am not sure whether it would be best to have one big array with about 8 drives that all the log files are on, or whether it would be better to create four two-disk arrays and give some of the larger databases their own dedicated disks for their log files?

    Read the article

  • SQLAuthority News – Free eBook Download – Introducing Microsoft SQL Server 2008 R2

    - by pinaldave
    Microsoft Press has published a FREE eBook on the most awaited release of SQL Server 2008 R2. The book is written by Ross Mistry and Stacia Misner. Ross is my personal friend and one of the most active book writers in the SQL Server domain. When I see his name on any book, I am sure that it will be a high-quality and easy-to-read book. The details about the book are here: Introducing Microsoft SQL Server 2008 R2, by Ross Mistry and Stacia Misner. The book contains 10 chapters and 216 pages.

    PART I   Database Administration
    CHAPTER 1   SQL Server 2008 R2 Editions and Enhancements
    CHAPTER 2   Multi-Server Administration
    CHAPTER 3   Data-Tier Applications
    CHAPTER 4   High Availability and Virtualization Enhancements
    CHAPTER 5   Consolidation and Monitoring
    PART II   Business Intelligence Development
    CHAPTER 6   Scalable Data Warehousing
    CHAPTER 7   Master Data Services
    CHAPTER 8   Complex Event Processing with StreamInsight
    CHAPTER 9   Reporting Services Enhancements
    CHAPTER 10   Self-Service Analysis with PowerPivot

    More detail about the book is listed here. You can download the ebook in XPS format here and in PDF format here. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Business Intelligence, Pinal Dave, SQL, SQL Authority, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

  • What do you think are the biggest software development issues, in small to medium businesses?

    - by Ron-Damon
    Hi, I own a small software development company that develops web software for other small and medium companies in Chile. The business process is very complex and it is hard to establish where to put the effort to make our company better and more efficient, and to give better solutions. I'm also an IT master's degree student and I'm writing a paper on this subject, so any help would be great for both my company and my paper. I have considered 3 areas for the problems:

    1) Software development problems
    2) Web development problems
    3) Small and medium companies problems

    I don't know about you, but at least this "business formula" in Chile has not received very much support, though it is getting better; still, today my company is far from being self-sufficient.

    UPDATE: Thanks guys for your support so far. I'm updating because I now have enough information, so I decided to go deeper into the subjects, which I would like you to consider for your next answers/commentaries:

    1) Software development problems (3)
    1.1 Incomplete problem picture
    1.2 Useless delivered software
    1.3 Unrealistic or inadequate schedule

    2) Web development problems (3)
    2.1 Apparently non-viable implementation
    2.2 Inefficient module construction design
    2.3 Reduced result system inter-operability

    3) Small and medium companies problems (3)
    3.1 Very specific, but narrowed required system characteristics
    3.2 Developed system is not used
    3.3 Positivist demand for activities in project execution

    There are only 3 problems per category, to deliberately keep a thinner scope. Also, I considered that it would have been appropriate to split the third classification into two, but I won't be doing that just now:

    3) Small and medium software development providers problems
    4) Small and medium software development clients problems

    In that case, I think I would have made the scope of the problem wider, and that is not what I want to do now, at least until I'm completely through with the other two classifications. What do you think?

    Read the article

  • Is it a must to focus on one specific IT subject to be successful?

    - by Ahmet Yildirim
    Lately I'm deeply disturbed by the thought that I'm still not devoted to one specific IT subject after so many years of doing it as a hobby. I've been into so many different IT-related hobbies since I was 12. I have spent 8 years on them, and now I'm 20 and have just finished my freshman year in Computer Engineering. Just to summarize the variety:

    3D game dev and modelling (Acknex, Irrlicht, OpenGL, GLES, 3DSMAX)
    Mobile app dev (Symbian, Maemo, Android)
    Electronics (Arduino)
    Web dev (PHP, MySQL, JavaScript, jQuery, RaphaelJS, Canvas, Flash etc.)
    Computer vision (OpenCV)

    I need to start making money, but I'm having trouble picking the right IT business to do it in. Is it a problem (in the business world) to have an interest in so many different IT subjects? I'm having a lot of fun doing all this stuff from time to time. Other than the money question, I have also noticed that having so many different interests is lowering my productivity. But I'm still having difficulty picking one.

    Read the article

  • EBS 11i - Localization package for the new VAT (ÁFA) law

    - by user552636
    In today's world we have become used to everything changing, so it no longer surprises us that the VAT (ÁFA) law is being amended again from 2013. The good news for our customers using E-Business Suite 11i is that, in connection with the change in the tax law (Art. 31/B §), the new VAT localization package for Oracle E-Business Suite 11i (11.5.10) is ready; it supports producing the data required from 2013 for the domestic recapitulative statement ("Összesíto jelentés").

    The new VAT package contains the following three new report groups, which, like the current VAT solution, allow a preliminary review of the report results, finalization of the results, and, if needed, printing of a copy:

    - "OHU: ÁFA analitika és összesíto jelentés 2013 (Elozetes)"
    - "OHU: ÁFA analitika és összesíto jelentés 2013 (Végleges)"
    - "OHU: ÁFA analitika és összesíto jelentés 2013 (Másolat)"

    The following programs will be available in these report groups:

    - OHU: ÁFA analitika kimutatás
    - OHU: Belföldi összesíto jelentés partnerenként

    The "OHU: ÁFA analitika kimutatás" report does not change functionally; only the technical modifications required to make the data it collects available to the "OHU: Belföldi összesíto jelentés partnerenként" report are being applied. The newly introduced "OHU: Belföldi összesíto jelentés partnerenként" report lists, per partner, the data needed to fill in the NAV 1365A-01-05 summary sheet and the 1365M sheets; it is planned to be delivered in Excel format, with the amounts rounded to thousands of forints (eFt) as required by the return. (Displaying the report data in Excel format requires the Oracle BI Publisher product to be installed.)

    The "OHU: Levonható ÁFA megosztási kimutatás" report and the related "OHU: ÁFA Fizeto pozíció bejegyzése" program, previously available in the localization package, will not be updated, since earlier tax law changes have already made their use unnecessary. When preparing VAT returns for 2013, we recommend running the new "OHU: ÁFA analitika és összesíto jelentés 2013" report groups; the earlier VAT report groups should be taken out of use.

    The new VAT package to be used from 2013 is available through the Oracle Support service. Customers can request the localization package detailed above by opening a service request (SR) on My Oracle Support against product code 1713 (EMEA Add-on Localizations).

    Read the article

  • NINTENDO, EDCON and ALLEGIS GROUP @ Oracle Open World 2012 Conference Session (CON9418): The Business Case for Oracle Exalogic: A Customer Perspective

    - by Sanjeev Sharma
    Are you looking to deliver breakthrough performance for packaged and custom applications? For many front-office applications such as Oracle WebCenter Sites, Oracle Transportation Management, and Oracle's ATG and Siebel product families, improved performance leads directly to greater revenue or cost savings from the business - a compelling proposition. For back-office applications, improved performance has tangible benefits in terms of footprint reductions. For all applications, Oracle Exalogic and Oracle Exadata provide an engineered solution that offers shorter time to value and lower operational costs.

    Edcon is a leading clothing, footwear and textiles (CFT) retailing group in southern Africa trading through a range of retail formats. The company has grown from opening its first store in 1929 to ten retail brands trading in over 1000 stores in South Africa, Botswana, Namibia, Swaziland and Lesotho. Edcon's retail business has, through recent acquisitions, added top stationery and houseware brands as well as general merchandise to its CFT portfolio. Edcon was looking to consolidate its existing middleware components (WebLogic and Oracle SOA) and retail applications (Retek, Siebel and E-Business Suite) on a common platform and turned to Oracle Exalogic. With Oracle Exalogic, Edcon is able to derive significant HW CAPEX savings, improve response time of core business applications and mitigate operating risk.

    Hear senior business leaders from Nintendo, Edcon and Allegis Group discuss the business value of leveraging Oracle Exalogic at the following conference session at Oracle Open World 2012:

    Session: CON9418 - The Business Case for Oracle Exalogic: A Customer Perspective
    Date: Monday, 1 Oct, 2012
    Time: 1:45 pm - 2:45 pm (PST)
    Venue: Moscone South (306)

    Read the article

  • Should I be looking for developers with specific skill sets or generalists that need to learn?

    - by Lostsoul
    Thanks to the great help of this site and SO, I've been able to make a prototype of software I want to sell, but unfortunately, although the prototype works, I think my code quality is very low. I didn't use much OOP or many design patterns, so although my code is understandable to me, I think a normal developer would faint if they had to read it. So I wanted to hire a developer to raise its quality a bit and improve some of my implementations of APIs that I may not have done correctly.

    I'm having problems hiring a developer, though. I have met 2 developers and had them read my software specs. The problem is, they lacked my business's domain knowledge (which is completely understandable and no biggie), but they also lacked knowledge of the underlying tech systems I used, such as Hadoop, HBase, CUDA, etc. I spent a lot of time explaining map/reduce, bigtables and other technologies I used. I thought it was common knowledge because of my interactions with people on this site, but the people I met with mentioned they had never had to deal with these things, so they didn't know them.

    My question is: for software projects that hire contract developers, is it a danger if the developer does not have experience with the underlying technologies? Or can a general developer who is accomplished in another area realistically pick up new technologies? I did a very, very quick back-of-envelope calculation, and I think the upfront costs would be similar if I hire a student or a developer with no experience in my technologies who will work many hours, versus hiring a highly experienced developer who charges double but finishes in half the time; but what other risks should I be considering or worried about? Also, if I do hire a generalist, should I be paying for the time it takes them to learn Hadoop or CUDA if they are contractors? (It seems to make business sense, but I'm not sure how fair it is to them if they do not use the skill again.) I'm a bit confused, so any suggestions would be great.

    Read the article

  • Where to ask a question about startups?

    - by Wolfpack'08
    I've got some questions about how to better run my web application development business, which has only been running for a little more than two years. It's still in its early phases, as I consider the first five years the 'early years'. Being inexperienced with business in general, I always have a lot of questions as to whether I am making the right decisions (for example, I often worry about my hiring practices, and I often worry that I may have priced new products wrongly). Is there a good site on the Stack Exchange to ask questions about things like this (for example, this site, the Project Management site, the Salesforce site, or perhaps the Personal Finance site)? I'm combing through questions and answers on each of these sites, now, and I can see questions that mimic my own. Nothing precisely the same, but things that are similar on all sites. Apart from just reading through previously asked questions, what is a good way to get a sense of whether or not my question fits on a site in the exchange? If you recommend going out of the exchange, please also let me know.

    Read the article

  • SQL SERVER – Concurrency Problems and their Relationship with Isolation Level

    - by pinaldave
    Concurrency is, simply put, the capability of the machine to support two or more transactions working with the same data at the same time. This usually comes up when data is being modified, as during the retrieval of the data this is not an issue. Most of the concurrency problems can be avoided by SQL locks. There are four types of concurrency problems visible in normal programming.

    1) Lost Update – This problem occurs when there are two transactions involved and both are unaware of each other. The transaction which occurs later overwrites the transaction created by the earlier update.

    2) Dirty Reads – This problem occurs when a transaction selects data that isn't committed by another transaction, leading it to read data which may not exist when the transactions are over. Example: Transaction 1 changes the row. Transaction 2 changes the row. Transaction 1 rolls back the changes. Transaction 2 has selected the row which does not exist.

    3) Nonrepeatable Reads – This problem occurs when two SELECT statements of the same data result in different values because another transaction has updated the data between the two SELECT statements. Example: Transaction 1 selects a row, which is later on updated by Transaction 2. When Transaction 1 later on selects the row it gets a different value.

    4) Phantom Reads – This problem occurs when an UPDATE/DELETE is happening on one set of data and an INSERT/UPDATE is happening on the same set of data, leading to inconsistent data in the earlier transaction when both transactions are over. Example: Transaction 1 is deleting 10 rows which are marked as deleted rows; during the same time Transaction 2 inserts a row marked as deleted. When Transaction 1 is done deleting rows, there will still be rows marked to be deleted.

    When two or more transactions are updating the data, concurrency is the biggest issue. I commonly see people toying around with the isolation level or locking hints (e.g. NOLOCK) etc., which can very well compromise your data integrity, leading to a much larger issue in the future. Here is the quick mapping of the isolation levels to the concurrency problems:

    Isolation          Dirty Reads   Lost Update   Nonrepeatable Reads   Phantom Reads
    Read Uncommitted   Yes           Yes           Yes                   Yes
    Read Committed     No            Yes           Yes                   Yes
    Repeatable Read    No            No            No                    Yes
    Snapshot           No            No            No                    No
    Serializable       No            No            No                    No

    I hope this small 400-word article gives some quick understanding of concurrency issues and their relation to isolation levels. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
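
    From application code, the isolation level in the table above is normally chosen per connection rather than with query hints such as NOLOCK. A minimal JDBC sketch of that choice; the connection string, credentials and table are assumptions for illustration only:

        import java.sql.*;

        public class IsolationDemo {
            public static void main(String[] args) throws SQLException {
                String url = "jdbc:sqlserver://localhost;databaseName=Demo"; // hypothetical server
                try (Connection con = DriverManager.getConnection(url, "user", "password")) {
                    con.setAutoCommit(false);
                    // Repeatable Read prevents dirty and nonrepeatable reads; phantoms remain possible.
                    con.setTransactionIsolation(Connection.TRANSACTION_REPEATABLE_READ);
                    try (Statement st = con.createStatement();
                         ResultSet rs = st.executeQuery("SELECT Balance FROM Account WHERE Id = 1")) {
                        while (rs.next()) {
                            System.out.println(rs.getLong("Balance"));
                        }
                    }
                    con.commit();
                }
            }
        }

    Serializable is selected the same way with Connection.TRANSACTION_SERIALIZABLE; the Snapshot level has no java.sql constant and is exposed through the SQL Server driver's own constant instead.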

    Read the article

  • Routing static IP traffic on a Comcast Business Class IP Gateway (SMCD3G-CCR)

    - by Jakobud
    We are in the process of replacing our firewall, which is currently the only thing connected to our Comcast Business Class modem. Comcast gives us 5 static IP addresses. Currently, all traffic to all 5 static IPs goes directly to the existing firewall. Eventually, obviously, all traffic will go to the new firewall, once the old firewall is removed from the network. But in the meantime, as we will have two firewalls plugged into the same Comcast modem, I need to route certain traffic to the new firewall instead of the old one. The firewall switchover is going to be slow and gradual as I am testing it, so I can't simply unplug the existing firewall and plug in the new one.

    So my question is: how do I tell the modem to route all traffic that goes to a specific IP to the new firewall instead of the old one? I've logged into the web interface for the modem, but the available options aren't very clear. There is a 1-to-1 NAT option (whose interface I can't seem to get to work properly), but I also see a "Static Routing" section. I always understood static routing to refer to routing data within the LAN, though, so I'm not sure if that's what I'm looking for or not. Keep in mind, I'm not looking to do simple port forwarding. I want 100% of the traffic to certain public static IPs to go to the specified connected firewall (I'll deal with service policies there). The modem is an SMC SMCD3G-CCR and is labeled as a Comcast Business Class Business IP Gateway. Any help or direction would be greatly appreciated.

    Read the article

  • My Right-to-Left Foot (T-SQL Tuesday #13)

    - by smisner
    As a business intelligence consultant, I often encounter the situation described in this month's T-SQL Tuesday, hosted by Steve Jones (Blog | Twitter) - "What the Business Says Is Not What the Business Wants." Steve posed the question, "What issues have you had in interacting with the business to get your job done?"

    My profession requires me to have one foot firmly planted in the technology world and the other foot planted in the business world. I learned long ago that the business never says exactly what the business wants, because the business doesn't have the words to describe what the business wants accurately enough for IT. Not only do technological-savvy barriers exist, but there are also linguistic barriers between the two worlds. So how do I cope?

    The adage "a picture is worth a thousand words" is particularly helpful when I'm called in to help design a new business intelligence solution. Many of my students in BI classes have heard me explain ("rant") about left-to-right versus right-to-left design. To understand what I mean about these two design options, let's start with a picture: when we design a business intelligence solution that includes some sort of traditional data warehouse or data mart design, we typically place the data sources on the left, the new solution in the middle, and the users on the right.

    When I've been called in to help course-correct a failing BI project, I often find that IT has taken a left-to-right approach. They look at the data sources, decide how to model the BI solution as a _______ (fill in the blank with data warehouse, data mart, cube, etc.), and then build the new data structures and supporting infrastructure. (Sometimes, they actually do this without ever having talked to the business first.) Then, when they show what they've built to the business, the business says that is not what we want. Uh-oh.

    I prefer to take a right-to-left approach, preferably at the beginning of a project. But even if the project starts left-to-right, I'll do my best to swing it around so that we're back to a right-to-left approach. (When circumstances are beyond my control, I carry on, but it's a painful project for everyone - not because of me, but because the approach just doesn't get to what the business wants in the most effective way.) By using a right-to-left approach, I try to understand what it is the business is trying to accomplish. I do this by having them explain reports to me, and explain the decision-making process that relates to these reports. Sometimes I have them explain their business processes to me, or better yet show me their business processes in action, because I need pictures, too. I (unofficially) call this part of the project "getting inside the business's head." This is starting at the right side of the diagram above.

    My next step is to start moving leftward. I do this by preparing some type of prototype. Depending on the nature of the project, this might mean that I simply mock up some data in a relational database and build a prototype report in Reporting Services. If I'm lucky, I might be able to use real data in a relational database. I'll either use a subset of the data in the prototype report by creating a prototype database to hold the sample data, or select data directly from the source. It all depends on how much data there is, how complex the queries are, and how fast I need to get the prototype completed. If the solution will include Analysis Services, then I'll build a prototype cube. Analysis Services makes it incredibly easy to prototype. You can sit down with the business, show them the prototype, and have a meaningful conversation about what the BI solution should look like. I know I've done a good job on the prototype when I get knocked out of my chair so that the business user can explore the solution further independently. (That's really happened to me!)

    We can talk about dimensions, hierarchies, levels, members, measures, and so on with something tangible to look at and without using those terms. It's not helpful to use sample data like Adventure Works or to use BI terms that they don't really understand. But when I show them their data using the BI technology and talk to them in their language, then they truly have a picture worth a thousand words. From that, we can fine-tune the prototype to move it closer to what they want. They have a better idea of what they're getting, and I have a better idea of what to build.

    So right-to-left design is not truly moving from the right to the left. But it starts from the right and moves towards the middle, and once I know what the middle needs to look like, I can then build from the left to meet in the middle. And that's how I get past what the business says to what the business wants.

    Read the article

< Previous Page | 54 55 56 57 58 59 60 61 62 63 64 65  | Next Page >