Search Results

Search found 6868 results on 275 pages for 'voyager systems'.

Page 234/275 | < Previous Page | 230 231 232 233 234 235 236 237 238 239 240 241  | Next Page >

  • Storing large numbers of varying size objects on disk

    - by Foredecker
    I need to develop a system for storing large numbers (tens to hundreds of thousands) of objects. Each object is email-like - there is a main text body, and several ancillary text fields of limited size. A body will be from a few bytes to several KB in size. Each item will have a single unique ID (probably a GUID) that identifies it. The store will only be written to when an object is added to it. It will be read often. Deletions will be rare. The data is almost all human-readable text, so it will be readily compressible. A system that lets me issue the I/Os and manage the memory and caching would be ideal. I'm going to keep the indexes in memory, using them to map to the single (and primary) key for the objects. Once I have the key, I'll load the object from disk, or from the cache. The data management system needs to be part of my application - I do not want to depend on OS services or separately installed packages. Native (C++) would be best, but a managed (C#) thing would be OK. I believe that a database is an obvious choice, but this needs to be super-fast for looking up an object and loading it into memory. I am not experienced with database tech and I'm concerned that general relational systems will not handle all this variable-sized data efficiently. (Note, this has nothing to do with my job - it's a personal project.) In your experience, what are the viable alternatives to a traditional relational DB? Or would a DB work well for this?
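
    A minimal native sketch of one common approach to the requirements above: a single append-only data file plus an in-memory index mapping a GUID-like string key to an (offset, length) pair, so a read is one map lookup and one disk read. All names here are illustrative, not from the original post.

        #include <cstdint>
        #include <fstream>
        #include <string>
        #include <unordered_map>

        // One record per object: where it lives in the data file and how big it is.
        struct Location { std::uint64_t offset; std::uint32_t length; };

        class BlobStore {
        public:
            explicit BlobStore(const std::string& path)
                : file_(path, std::ios::in | std::ios::out | std::ios::binary | std::ios::app) {}

            // Append the body and remember where it landed; the index stays in memory.
            void put(const std::string& key, const std::string& body) {
                file_.seekp(0, std::ios::end);
                Location loc{ static_cast<std::uint64_t>(file_.tellp()),
                              static_cast<std::uint32_t>(body.size()) };
                file_.write(body.data(), static_cast<std::streamsize>(body.size()));
                file_.flush();
                index_[key] = loc;
            }

            // Look up the key in memory, then issue a single read for the body.
            bool get(const std::string& key, std::string& out) {
                auto it = index_.find(key);
                if (it == index_.end()) return false;
                out.resize(it->second.length);
                file_.seekg(static_cast<std::streamoff>(it->second.offset));
                file_.read(&out[0], static_cast<std::streamsize>(out.size()));
                return static_cast<bool>(file_);
            }

        private:
            std::fstream file_;
            std::unordered_map<std::string, Location> index_;
        };

    A real version would also write a small per-record header (key and length) so the index can be rebuilt on startup, and could compress each body before writing, since the text is highly compressible.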

    Read the article

  • Need advice - Developing a flexible documentation system, heavily focused on localization

    - by inkedmn
    I've been charged with building a documentation system/platform. Here's a short list of the major requirements: Easily localized: this will need to support a dozen or so languages out of the gate (the ability for non-technical personnel to add/update translations would be a big plus, though not 100% required). Flexibility in output formats: at the bare minimum, I need to output the documents (either as a whole or in selected chunks) as PDF and HTML; bonus points for native formats like Windows Help files. Managed and deployed via an intuitive user interface (web, ideally). I'm wondering if you folks know of any systems out there that support this type of thing already? I'm not averse to writing this from scratch, but I'd rather not reinvent the wheel if I can help it. The two major candidates I've come across thus far are DocBook and reST. The former seems to have garnered a reputation for, well, sucking. I'm unfamiliar with either, but I'm told that reST would get me a good portion of the way there. Any other suggestions? Would I be better off building this from scratch?

    Read the article

  • Sys. engineer has decided to dynamically transform all XSLs into DLLs on website build process. DLL

    - by John Sullivan
    Hello, OS: Win XP. Here is my situation. I have a browser-based application. It is "wrapped" in a Visual Basic application. Our "Systems Engineer Senior" has decided to spawn DLL files from all of our XSL pages (many of which have duplicate names) upon building a new instance of the website, and to have the active server pages (ASPX) use the DLLs instead. This has created a "known issue" in which ~200 DLL naming conflicts occur and, thus, half of our application is broken. I think a solution to this problem is that, thankfully, we're generating the names of the DLLs and linking them up with our application dynamically. Therefore we can do something kludgy like generate a hash and append it to the end of the DLL file name when we build our website, then always reference the DLL that had some kind of random string/hash appended to its name. Aside from outright renaming the DLLs, is there another way to have multiple DLLs with the same name registered for one application? I think the answer is "No, only between different applications using a special technique." Please confirm. Another question I have on my mind is whether this whole idea is good practice -- converting our XSL pages (which we use en masse -- on every response from our web app) into DLL functions that call a "function" to do what the XSL page did via an active server page (ASPX), when previously we were just sending an XML response to an XSL page via ASPX.

    Read the article

  • Why does a trigger query fail in SQLite with Qt?

    - by dexterous_stranger
    I am a beginner in SQL. I am using SQLite with Qt on an embedded system, and I want to add a trigger: whenever the primary key Id is greater than 32145, channelNum should be set to 101. I also want to set the attrib text column, but I get a query error. I believe that creating a trigger is part of DDL (Data Definition Language); please let me know if I am wrong here. Here is my database creation code, which produces the SQL query error. Please also suggest how to set the text column in the trigger, i.e. attrib = "Comedy".

        /** associate db with query **/
        QSqlQuery query(m_demo_db);

        /** Foreign keys are disabled by default in SQLite; this pragma turns them on first **/
        query.exec("PRAGMA foreign_keys = ON;");
        if (false == query.exec()) { qDebug() << "Pragma failed"; }

        /** Create table for storing user preference LCN for DTT **/
        qDebug() << "Create Table postcode.db";
        query.prepare("CREATE TABLE dttServiceList (Id INTEGER PRIMARY KEY, attrib varchar(20), channelNum integer)");
        if (false == query.exec()) { qDebug() << "Create dttServiceList table failed"; }

        /** Try placing trigger here **/
        triggerQuery = "CREATE TRIGGER upd_check BEFORE INSERT ON dttServiceList \
                        FOR EACH ROW \
                        BEGIN \
                          IF Id > 32145 THEN SET channelNum=101; \
                          END IF; \
                        END; ";
        query.prepare(triggerQuery);
        if (false == query.exec()) {
            qDebug() << "Trigger failed !!";
            qDebug() << query.lastError();
        }
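
    SQLite triggers do not understand the IF ... THEN ... END IF syntax used above; the usual pattern is a WHEN clause on the trigger plus an UPDATE of the just-inserted row (a BEFORE INSERT trigger cannot assign to NEW columns in SQLite, hence AFTER INSERT). A hedged sketch of what the create statement could look like from the same Qt code; the 'Comedy' value is only illustrative:

        // Fire after the insert, only for rows whose Id exceeds the threshold,
        // then fix up the newly inserted row itself via NEW.Id.
        QString triggerQuery =
            "CREATE TRIGGER upd_check AFTER INSERT ON dttServiceList "
            "WHEN NEW.Id > 32145 "
            "BEGIN "
            "  UPDATE dttServiceList SET channelNum = 101, attrib = 'Comedy' "
            "  WHERE Id = NEW.Id; "
            "END;";
        if (!query.exec(triggerQuery)) {
            qDebug() << "Trigger failed:" << query.lastError();
        }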

    Read the article

  • data ownership and performance

    - by Ami
    We're designing a new application and ran into some architectural questions about data ownership. We broke the system down into components, for example Customer and Order. Each component/module is responsible for a specific business domain, i.e. Customer deals with CRUD of customers and business processes centered around customers (register a new customer, block a customer account, etc.). Each module owns a set of database tables, and only that module may access them. If a module needs data that is owned by another module, it retrieves it by requesting it from that module. So far so good. The question is how to deal with scenarios such as a report that needs to show all the customers and, for each customer, all of his orders. In such a case we need to get all the customers from the Customer module, iterate over them, and for each one get all the data from the Order module. Performance won't be good. Obviously it would be much better to have a stored proc join the customers table and the orders table, but that would also mean direct access to data owned by another module, creating coupling and dependencies that we wish to avoid. This is a simplified example; we're dealing with an enterprise application with a lot of business entities and relationships, and my goal is to keep it clean and as loosely coupled as possible. I foresee many changes to the data schema in the future, and possibly splitting the system into several completely separate systems. I wish to have a design that would allow this to be done in a relatively easy way. Thanks!

    Read the article

  • Is CDS a valid analogy for pointers? [closed]

    - by Flinkman
    So... bear with me. I just found an analogy between C++ pointers and CDS. This clip describes CDS (Credit Default Swaps). http://www.youtube.com/watch?v=KPNdYtrlgaU#t=120s "Here we know we have an instrument, a particular financial instrument, that is demonstrably dangerous. It creates long chains of risk which are vulnerable to the failure of individual traders or market participants in that chain, and these instruments in effect permit the creation of vicious spirals, in which the CDS price interacts with the bond price, the market price, and you can have a downward spiral." What my ears are telling me: "Don't create dependencies that will create long chains of crashing systems." Update: Trying to clarify with something that is closer to the readers. If I change the words: instrument = construct, financial = language, trader = object, market participants = C structs, CDS price = uptime, bond price = outcome, market price = ROI (return on investment), the quote becomes more understandable. Look: "Here we know we have a construct, a particular language construct, that is demonstrably dangerous. It creates long chains of risk which are vulnerable to the failure of individual objects or structs in that chain, and these systems in effect permit the creation of vicious spirals, in which the uptime interacts with the outcome, the ROI, and you can have a downward spiral."

    Read the article

  • Can Visual Studio (should it be able to) compute a diff between any two changesets associated with a work item?

    - by Hamish Grubijan
    Here is my use case: I start on a project XYZ, for which I create a work item, and I make frequent check-ins, easily 10-20 in total. ALL of the code changes will be code-read and code-reviewed. The changesets are not consecutive - other people check in in between my changes, although they are very unlikely to touch the exact same files. So ... at the end of the project I am interested in a "total diff" - as if there was a single check-in by me to complete the entire project. In theory this is computable. From the list of changesets associated with the work item, you get the list of all files that were affected. Then, the algorithm can aggregate individual diffs over each file and combine them into one. It is possible that a pure total diff is uncomputable due to the fact that someone else renamed files, or changed stuff around very closely, or in the same functions as me. In that case ... I suppose a total diff can include those changes by others as well, and warn me about the fact. I would find this very useful, but I do not know how to do it in practice. Can Visual Studio 2008/2010 (and/or TFS server) do it? Are there other source control systems capable of doing this? Thanks.

    Read the article

  • NHibernate Performance Optimization | Suggestions invited!!!

    - by user336749
    Hi, I'm facing an issue with NHibernate performance; can you please suggest some optimizations? Below is a short summary of my application architecture. I have a Windows service which is listening to a messaging bus. On receiving a message, the service creates an object, one of whose properties is the received XML snippet, and saves the message to the DB (using NH). There is a WPF UI with a read-only connection to the DB, and on refresh the UI displays the objects on the screen. While the UI does a refresh, it retrieves the XML and deserializes it, from which the object's properties are derived and bound to the screen. For example, assume an XML document XXX is received by the service: it deserializes the XML, creates the book object, and saves it to the DB, where a property/column SCHEMA contains the XML snippet. The UI, when refreshed, searches all book objects by ID and creates the book objects out of the saved XML (yes, the XML is the constructor param). Now my issue is that the refresh takes more than 2 minutes to display, say, 50 book objects. I analyzed it using the NHibernate profiler and found that the time spent within the DB is negligible; however, the time spent creating the entities is proportionally huge (10 ms : 1990 ms). I guess it's due to the fairly large size of the XML snippet and its deserialization. My question is, how can I improve the performance? I dispose of sessions after every refresh and am not lazy loading (please note that the time spent in the DB is negligible). On every refresh it's possible that all objects have been updated by some downstream systems, or maybe only one of them has been updated. Can I implement some sort of caching mechanism in this case? Thanks in advance for any suggestions. Regards, -Mike

    Read the article

  • Which DVCS would work best on Windows for my scenario?

    - by PoorLuzer
    At work I use ClearCase and SourceSafe, but I have found some time to code for myself en route, thanks to a disposable laptop. However, I wish I had a lightweight VCS on my system with which I could make changes to my code during the commute and then push/grab them from my Linux systems. I use git on my home system, but I can't really get it working on Windows, and I don't want the whole Cygwin hack. If it does not run natively on Windows, it just won't do. What have you tried on your Windows system? Something that YOU use. The big player at the moment seems to be Mercurial? What would be best for a one (or maybe two) man team? I just need to maintain: Versioned copies of source code. Checking in and out should be as unobtrusive as possible. I am looking for a multiple-undo kind of feature (like that in an Emacs buffer), but persistent. I really like the way git keeps track of lines moving between files in a source code set. I should be able to move part(s)/sub-tree(s) of the source tree (each sub-tree implies a module/plugin to the main software I am building) to an archival system, either completely or partially, and restore them back from the archive as and when required, and the system should track any changes to this tree as well. I actually want to experiment with my code as much as possible without manually keeping track of what I modified and what I need to undo once I try out some idea, so that I am back to where I want to continue from. Notes: A similar topic came up a year ago: http://stackoverflow.com/questions/4670/dvcs-choices-whats-good-for-windows I hope things have changed, and I really want people to share their own, real-life experiences. Not something they recommend without using it or merely think will work.

    Read the article

  • Tools for debugging when the debugger can't get you there?

    - by brian1001
    I have a fairly complex (approx. 200,000 lines of C++ code) application that has decided to crash, although it crashes a little differently on a couple of different systems. The trick is that it doesn't crash or trap out in the debugger. It only crashes when the application .EXE is run independently (either the debug EXE or the release EXE - both behave the same way). When it crashes in the debug EXE and I get it to start debugging, the call stack is buried down in the Windows/MFC part of things and doesn't reflect any of my code. Perhaps I'm seeing stack corruption of some sort, but I'm just not sure at the moment. My question is more general - it's about tools and techniques. I'm an old programmer (C and assembly language days) and a relative newcomer (a couple/few years) to C++ and Visual Studio (2003 for this project). Are there tricks or techniques anyone's had success with in tracking down crashing issues when you cannot make the software crash in a debugger session? Stuff like permission issues, for example? The only thing I've thought of is to start plugging in debug/status messages to a log file, but that's a long, hard way to go. Been there, done that. Any better suggestions? Am I missing some tools that would help? Is VS 2008 better for this kind of thing? Thanks for any guidance. Some very smart people here (you know who you are!). Cheers.
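
    One technique that often helps when a crash only happens outside the debugger is to have the EXE write a minidump from an unhandled-exception filter, then open the .dmp file in Visual Studio or WinDbg afterwards. A rough Win32 sketch, with error handling trimmed and the file name purely illustrative:

        #include <windows.h>
        #include <dbghelp.h>
        #pragma comment(lib, "dbghelp.lib")

        // Called by Windows when nothing else handles the exception.
        static LONG WINAPI WriteCrashDump(EXCEPTION_POINTERS* info)
        {
            HANDLE file = CreateFileA("crash.dmp", GENERIC_WRITE, 0, NULL,
                                      CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
            if (file != INVALID_HANDLE_VALUE)
            {
                MINIDUMP_EXCEPTION_INFORMATION mei;
                mei.ThreadId = GetCurrentThreadId();
                mei.ExceptionPointers = info;
                mei.ClientPointers = FALSE;
                MiniDumpWriteDump(GetCurrentProcess(), GetCurrentProcessId(), file,
                                  MiniDumpNormal, &mei, NULL, NULL);
                CloseHandle(file);
            }
            return EXCEPTION_EXECUTE_HANDLER;   // let the process terminate after dumping
        }

        // Install once, early in main()/WinMain(), in both debug and release builds.
        void InstallCrashHandler()
        {
            SetUnhandledExceptionFilter(WriteCrashDump);
        }

    The resulting dump, loaded together with the matching .pdb files, usually shows the faulting call stack even for a release EXE run on a customer machine.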

    Read the article

  • How can I create a rules engine without using eval() or exec()?

    - by Angela
    I have a simple rules/conditions table in my database which is used to generate alerts for one of our systems. I want to create a rules engine or a domain-specific language. A simple rule stored in this table would be (omitting the relationships here): if temp > 40 send email. Please note there would be many more such rules. A script runs once daily to evaluate these rules and perform the necessary actions. At the beginning there was only one rule, so the script was written to support only that rule. However, we now need to make it more scalable to support different conditions/rules. I have looked into rules engines, but I hope to achieve this in some simple pythonic way. At the moment I have only come up with eval/exec, and I know that is not the most recommended approach. So, what would be the best way to accomplish this? (The rules are stored as data in the database, so each object like "temperature", condition like ">", "<=", etc., value like "40, 50, etc." and action like "email, sms, etc." are stored in the database. I retrieve these to form the condition, e.g. "if temp > 50 send email"; my idea was to then use exec or eval on them to make it live code, but I'm not sure if this is the right approach.)
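
    One common eval-free approach, given that the object, operator, threshold, and action are stored as separate columns, is to dispatch through small lookup tables instead of building source code. A minimal sketch assuming hypothetical column/field names:

        import operator

        # Comparison operators the rules table is allowed to reference.
        OPS = {">": operator.gt, ">=": operator.ge, "<": operator.lt,
               "<=": operator.le, "==": operator.eq, "!=": operator.ne}

        # Actions the rules table is allowed to trigger (stubs here).
        ACTIONS = {"email": lambda rule, value: print("emailing about", value),
                   "sms":   lambda rule, value: print("texting about", value)}

        def evaluate(rule, readings):
            """rule: a dict-like row from the DB, readings: current measured values."""
            value = readings[rule["object"]]          # e.g. readings["temp"]
            test  = OPS[rule["condition"]]            # e.g. OPS[">"]
            if test(value, rule["value"]):            # e.g. 45 > 40
                ACTIONS[rule["action"]](rule, value)  # e.g. send the email

        # The rule "if temp > 40 send email" expressed as a row.
        evaluate({"object": "temp", "condition": ">", "value": 40, "action": "email"},
                 {"temp": 45})

    Because only whitelisted operators and actions can ever run, arbitrary database content cannot execute as code, which is the main risk with eval/exec.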

    Read the article

  • Is there a user-level accessible font table present in Linux?

    - by youngdood
    Hi again Stack Overflow! There is this for MS-DOS: http://en.wikipedia.org/wiki/Code_page_437 - is there something similar for Linux systems? Is it possible to access that font data from a userland program? I would actually just need access to the actual bit patterns which define the font, and I would do the rendering myself. I'm fairly sure that something like this exists, but I haven't been able to find out what exactly it is and how to access it. After all, e.g. the text-mode console font has to reside somewhere, and I really do hope it is "rawly" accessible somehow to a userland program. Before I forget, I'm writing my program in C, and have access only to the "standard" Linux/POSIX development headers. The only thing I could come up with myself is to use the fonts in /usr/share/fonts, but having to write my own implementation to extract the data from there doesn't really sound like an option; I really want to achieve this with the least number of bytes possible, so I feel I'm left with finding a standard way of doing this. It's not really feasible for me to store my own 8x8 ASCII-compatible font with the program either (it takes some 1024 bytes (128 chars * 8x8 bits) just to store the font, which is definitely unacceptable for the strict size limits (some < 1024 bytes for code+data) I am working with), so being able to use the font data stored on the system itself would greatly simplify my task.
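
    On the Linux text console the currently loaded font can be read back through the console ioctl interface, with no font files involved. A hedged C sketch using KDFONTOP; this usually needs root (or ownership of the VT) and works on a real virtual console, not inside X:

        #include <fcntl.h>
        #include <linux/kd.h>      /* KDFONTOP, struct console_font_op */
        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/ioctl.h>
        #include <unistd.h>

        int main(void)
        {
            int fd = open("/dev/tty0", O_RDONLY);
            if (fd < 0) { perror("open /dev/tty0"); return 1; }

            struct console_font_op op;
            op.op = KD_FONT_OP_GET;
            op.flags = 0;
            op.width = 32;              /* maximum glyph size we can accept */
            op.height = 32;
            op.charcount = 512;         /* maximum number of glyphs we can accept */
            /* Each glyph is padded to 32 rows of ceil(width/8) bytes, so this is enough. */
            op.data = calloc(op.charcount, 32 * 4);
            if (ioctl(fd, KDFONTOP, &op) < 0) { perror("KDFONTOP"); return 1; }

            printf("console font is %ux%u, %u glyphs\n", op.width, op.height, op.charcount);
            /* op.data now holds the raw bit patterns, one padded glyph after another. */
            free(op.data);
            close(fd);
            return 0;
        }

    For a framebuffer or X environment this does not apply, but for the classic VT case it gives exactly the raw bitmaps described in the question.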

    Read the article

  • What job title would be most suitable for the objective in my resume, and what salary range should I expect?

    - by user354177
    I was a classic ASP developer in 2000. After a year of full-time employment, I left the field. I found a part-time position as an ASP developer again in 2005 and taught myself VB.NET. In 2007, I got my current full-time job as an ASP.NET web developer. I taught myself C#, LINQ to SQL, web services, AJAX, and creating all kinds of reports with Reporting Services. One and a half years ago, I enrolled in a part-time graduate program in Database and Web Systems. I have two semesters to go, and so far my GPA is 4.0/4.0. My job responsibility is to collect business requirements from other departments, design the database, write stored procedures, create ASPX pages, and create reports. I love what I do and want to advance my career to the next level. What I enjoy most is designing the relational database. I would eventually want to become a .NET architect. I got an interview; they were looking for an ASP.NET web developer. But I was surprised and disappointed that the position would only involve creating ASPX pages. I would not even have the opportunity to write stored procedures, let alone design the database (those would be provided by another group). Furthermore, they asked me some detailed questions about web forms, some of which I did not know the answers to. They might have been disappointed as well. I am eager to learn and can apply what I learn to real projects right away. I believe that no matter what specific skills I am lacking for a new position, I can catch up quickly. I am looking for a job in the $70k range. The objective in my resume is "Experienced C# Web Application Developer." Given the experience from the last interview, I wonder if that objective is really what I want. Could somebody answer my questions? Thank you.

    Read the article

  • Compiling 32-bit Program on VS 2008

    - by gordonwd
    I've been developing with VC++ 2003 on an XP PC but am now on Windows 7 and bought a cheap legal copy of VS 2008 to continue work on the same project. My product has to continue to run on customers' XP systems, so I'm strictly interested in a 32-bit executable. The first issue I ran into was the PRJ0003 error "spawning cl.exe". I had to add the path to this file to the VC++ Directories settings (it appears in both a bin\amd64 and a bin\x86_amd64 directory, but I don't think it matters output-wise which I use?). The issue I now have (not counting a tedious cleanup to convert strcpy to strcpy_s, etc.) is that I'm not clear on whether I'm generating a 32-bit or a 64-bit EXE out of this. My project properties are set to a target of "Win32", so I assume that all is well. Is this correct? I have read some discussions about this, but it's never quite clear whether they are talking about the compiler itself running x64 vs. x86, or the compiled code being x64 vs. x86, and how this is differentiated. So am I doing the right thing to generate a 32-bit, Win32, x86 program?

    Read the article

  • Where to create/keep secret files for license information/trials on Windows/Mac OS X/Linux?

    - by BastiBense
    I'm writing a commercial product which uses a simple registration mechanism and allows the user to use the application for a demo period before purchasing. My application must store the registration information (if entered) and/or the date of the first launch somewhere, to calculate whether the user is still within the demo/trial period. While I'm pretty much finished with the registration mechanism itself, I now have to find a good way to store the registration information on the user's disk. The most obvious idea would be to store the trial period in the preferences file, but since users tend to delete/tinker with those from time to time, it might be a good idea to keep the registration information in a separate, more hidden file. So here's my question: What is the best place/strategy to create and keep such hidden files on Windows, Mac OS X, and Linux? Here is what has come to my mind so far: Linux/Mac OS X: Most Unix-like systems are rather locked down when it comes to places a user can write files to. In most cases this is only the /tmp directory and the user's home directory. I guess the easiest here is probably to create a file with a dot prefix to make it less visible, then give it a name that won't make it obvious that it's associated with my application. Windows: Probably much like Linux/Mac OS X - more recent Windows versions have become more restrictive when it comes to file system permissions. Anyway, I'd like to hear your ideas and thoughts. Even better if you have already implemented something similar in the past. Thanks! Update: For me, where to put such files is more relevant than a discussion of whether this kind of copy protection is good or bad.
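
    For the "where" part, the conventional per-user bases are the application-data directory on Windows and a dot-path under $HOME (or ~/Library/Application Support on OS X). A small hedged C++ helper along those lines; the directory and file names are purely illustrative:

        #include <cstdlib>
        #include <string>

        // Returns a per-user, per-platform path for a low-profile state file.
        std::string licenseStatePath()
        {
        #if defined(_WIN32)
            const char* base = std::getenv("APPDATA");             // e.g. C:\Users\x\AppData\Roaming
            return std::string(base ? base : ".") + "\\MyApp\\.state";
        #elif defined(__APPLE__)
            const char* home = std::getenv("HOME");
            return std::string(home ? home : ".") + "/Library/Application Support/MyApp/.state";
        #else
            const char* home = std::getenv("HOME");                // Linux and other Unixes
            return std::string(home ? home : ".") + "/.config/myapp/.state";
        #endif
        }

    A common refinement is to keep a second copy (or a checksum of the data) in the ordinary preferences file as well, so deleting either one alone does not silently reset the trial.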

    Read the article

  • Creating a new window that stays on top even when in full screen mode (Qt on Linux)

    - by Lorenz03Tx
    I'm using Qt 4.6.3 and Ubuntu Linux on an embedded target. I call dlg->setWindowState(Qt::WindowFullScreen); on the windows in my application (so I don't lose any real estate on the touch screen to the task bar and status panel at the top and bottom of the screen). This all works fine and as expected. The issue comes in when I want to pop up the on-screen keyboard to allow the user to input some data. I use m_keyProc = new QProcess(); m_keyProc->start("onboard -s 640x120"); This pops up the keyboard, but it is behind the full-screen window. The onboard keyboard's preferences are set so that it is always on top, but that seems to actually mean "except for full-screen windows". I guess that makes sense and probably meets most use cases, but I need it to be really on top. Can I either A) not be in full-screen mode (so the keyboard works) and programmatically hide the task bars, or B) force the keyboard to be on top despite my full-screen status? Note: On Windows we call m_keyProc->start("C:\\Windows\\system32\\osk.exe"); and the osk keyboard is on top despite the full-screen status. So I'm guessing this is a difference in the window managers on the different operating systems. Do I need to set some flag on the window with the Linux window manager?
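
    For option A, one trick that sometimes works (it depends on the window manager) is to avoid the real full-screen window state entirely: show a frameless top-level window sized to the whole screen, which the WM then treats as a normal window, so an always-on-top keyboard can still cover it. A hedged Qt 4 sketch:

        // Instead of dlg->setWindowState(Qt::WindowFullScreen):
        // needs <QApplication> and <QDesktopWidget>
        dlg->setWindowFlags(dlg->windowFlags() | Qt::FramelessWindowHint);
        dlg->setGeometry(QApplication::desktop()->screenGeometry(dlg));
        dlg->show();    // normal window state, but covering the panels as well
        dlg->raise();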

    Read the article

  • How do I conditionally compile C code snippets to my Perl module?

    - by mobrule
    I have a module that will target several different operating systems and configurations. Sometimes some C code can make this module's task a little easier, so I have some C functions that I would like to bind to. I don't have to bind the C functions -- I can't guarantee that the end user even has a C compiler, for instance, and it's generally not a problem to fail over gracefully to a pure-Perl way of accomplishing the same thing -- but it would be nice if I could call the C functions from the Perl script. Still with me? Here's another tricky part. Just about all of the C code is system specific -- a function written for Windows won't compile on Linux and vice versa, and the function that does a similar thing on Solaris will look totally different.

        #include <some/Windows/headerfile.h>
        int foo_for_Windows_c(int a, double b) { do_windows_stuff(); return 42; }

        #include <path/to/linux/headerfile.h>
        int foo_for_linux_c(int a, double b) { do_linux_stuff(7); return 42; }

    Furthermore, even for native code that targets the same system, it's possible that only some of it can be compiled with any particular configuration.

        #include <some/headerfile/that/might/not/even/exist.h>
        int bar_for_solaris_c(int a, double b) { call_solaris_library_that_might_be_installed_here(11); return 19; }

    But ideally we could still use the C functions that would compile with that configuration. So my questions are: how can I compile C functions conditionally (compile only the code that is appropriate for the current value of $^O)? How can I compile C functions individually (some functions might not compile, but we still want to use the ones that can)? Can I do this at build time (while the end user is installing the module) or at run time (with Inline::C, for example)? Which way is better? How would I tell which functions were successfully compiled and are available for use from Perl? All thoughts appreciated!
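
    One run-time option along the lines of the Inline::C idea mentioned above is to bind each snippet individually inside an eval, so a failed compile (no compiler, missing header) simply leaves the pure-Perl fallback in place. A hedged sketch; the function bodies and names are illustrative, not the module's real code:

        use strict;
        use warnings;
        use Inline ();          # compile-and-bind at run time via Inline->bind

        my %c_src = (
            MSWin32 => q{ int foo_c(int a, double b) { /* Windows-only code here */ return 42; } },
            linux   => q{ int foo_c(int a, double b) { /* Linux-only code here */   return 42; } },
        );

        my $have_c = 0;
        if (my $src = $c_src{$^O}) {
            # If the compile fails for any reason, the eval catches the die and
            # we silently keep the Perl implementation instead.
            $have_c = eval { Inline->bind(C => $src); 1 } ? 1 : 0;
        }

        sub foo_pp { my ($x, $y) = @_; return 42 }   # pure-Perl fallback

        my $answer = $have_c ? foo_c(7, 3.14) : foo_pp(7, 3.14);
        print "foo -> $answer\n";

    Binding each function (or small group of functions) in its own eval answers the "compile individually" part: whatever compiled is callable, and the flags recorded alongside each eval tell you which fallbacks are still in use.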

    Read the article

  • Where is a small, simple CMS that has no Front End done in PHP?

    - by user559469
    The keys are: small and simple, PHP, MySQL, no front end. By "no front end" I mean literally that I can control the look 100%. I just want a CMS on the "backend" to manage content (user login/security, upload images, update articles, etc.) that will not dictate in any way how the managed data is presented. Maybe it just keeps the info in a (MySQL) database (which I can query and extract from myself), or if it writes content, it is in super-clean XHTML fragments or even just XML I will parse myself. I have looked at WordPress -- and don't like the code it generates, not to mention the sites look too "canned" (you can usually spot a WP site a mile away). Joomla and Drupal look more customizable, but they are bloated now in my opinion, and really I just want something lightweight and simple, for one-user mom-and-pop sites. (No tiered publishing/approval systems and all that.) I envision plugging this CMS into existing websites/web apps where most of the site is made and managed by me, but a few choice areas are managed by the site owner.

    Read the article

  • Conceptual data modeling: Is RDF the right tool? Other solutions?

    - by paprika
    I'm planning a system that combines various data sources and lets users do simple queries on these. A part of the system needs to act as an abstraction layer that knows all connected data sources: the user shouldn't [need to] know about the underlying data "providers". A data provider could be anything: a relational DBMS, a bug tracking system, ..., a weather station. They are hooked up to the query system through a common API that defines how to "offer" data. The type of queries a certain data provider understands is given by its "offer" (e.g. I know these entities, I can give you aggregates of type X for relationship Y, ...). My concern right now is the unification of the data: the various data providers need to agree on a common vocabulary (e.g. the name of the entity "customer" could vary across different systems). Thus, defining a high level representation of the entities and their relationships is required. So far I have the following requirements: I need to be able to define objects and their properties/attributes. Further, arbitrary relations between these objects need to be represented: a verb that defines the nature of the relation (e.g. "knows"), the multiplicity (e.g. 1:n) and the direction/navigability of the relation. It occurs to me that RDF is a viable option, but is it "the right tool" for this job? What other solutions/frameworks do exist for semantic data modeling that have a machine readable representation and why are they better suited for this task? I'm grateful for every opinion and pointer to helpful resources.

    Read the article

  • VB.Net Memory Issue

    - by Skulmuk
    We have an application that has some interesting memory usage issues. When it first opens, the program uses around 50-60MB of memory. This stays consistent on 32-bit machines. On 64-bit machines, however, re-activating the form in any way (clicking, dragging, alt-tabbing, etc.) adds around another 50MB to its memory usage. It repeats this process several times before resetting back to around 45MB, at which point the cycle begins again. I've done some research, and a lot of people have said that VB in general has pretty poor garbage collection, which could be affecting the software in some way. However, I've yet to find a solution. There are no events fired when the application is activated (as shown by the 32-bit usage) - the application is merely sitting awaiting the user's actions. At load, the system pulls some data into a tree view, but that's the only external connection, and it only re-fires the routine when the user makes a change to something and saves the change. Has anyone else experienced anything this strange, and if so, does anyone know what might fix it? It seems strange that it only occurs on x64 systems. Thanks

    Read the article

  • Log4j Logging to the Wrong Directory

    - by John
    I have a relatively complex log4j.xml configuration file with many appenders. Some machines the application runs on need a separate log directory, which is actually a mapped network drive. To get around this, we embed a system property as part of the filename in order to specify the directory. Here is an example: The "${user.dir}" part is set as a system property on each system, and is normally set to the root directory of the application. On some systems, this location is not the root of the application. The problem is that there is always one appender where this is not set, and the file appears not to write to the mapped drive. The rest of the appenders do write to the correct location per the system property. As a unit test, I set up our QA lab to hard-code the values for the appender above, and it worked: however, a different appender will then append to the wrong file. The mis-logged file is always the same for a given configuration: it is not a random file each time. My best educated guess is that there is a HashMap somewhere containing these appenders, and for some reason, the first one retrieved from the map does not have the property set. Our application does have custom system properties loading: the main() method loads a properties file and calls into System.setProperties(). My first instinct was to check the static initialization order, and to ensure the controller class with the main method does not call into log4j (directly or indirectly) before setting the properties just in case this was interfering with log4j's own initialization. Even removing all vestiges of log4j from the initialization logic, this error condition still occurs.
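
    The appender snippet referred to by "Here is an example" did not survive formatting. The pattern described (a system property spliced into the File parameter, resolved when DOMConfigurator parses the configuration) would look roughly like this in log4j 1.x, with the appender and file names purely illustrative:

        <appender name="MAPPED_DRIVE_LOG" class="org.apache.log4j.RollingFileAppender">
            <!-- ${user.dir} is substituted from System.getProperties() at configuration time -->
            <param name="File" value="${user.dir}/logs/app.log"/>
            <param name="MaxFileSize" value="10MB"/>
            <layout class="org.apache.log4j.PatternLayout">
                <param name="ConversionPattern" value="%d %-5p [%c] %m%n"/>
            </layout>
        </appender>

    Because substitution happens when the configuration file is parsed, any appender that gets configured before System.setProperties() has run (for example through an earlier, implicit configuration pass triggered by the first Logger lookup) would capture the default value, which could explain why it is always the same single appender that writes to the wrong place.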

    Read the article

  • Is it possible to submit data into a SQL database, wait for that to finish, and then return the ID generated?

    - by user322478
    I have an ASP form that needs to submit data to two different systems. First the data needs to go into an MS SQL database, where it will get an ID. I then need to submit all that form data to an external system, along with that ID. Pretty much everything in the code works just fine: the data goes into the database, and the data goes to the external system. The problem is that I am not getting my ID back from SQL when I execute the query. I am under the impression this is happening because of how fast everything occurs in the code; the database is adding its row at the same time my post page runs its query to get the ID back, I think. I need to know of a way to wait until SQL finishes the insert, or maybe wait for a specific amount of time. I already tried using the hacks to "sleep" in ASP; that did not help. I am sure I could accomplish this in .NET - my background is more .NET than ASP - but this is what I have to work with on my current project. Any ideas?
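
    The insert and the ID lookup can be done in a single round trip, so there is nothing to wait for: have SQL Server return SCOPE_IDENTITY() in the same batch as the INSERT (or use a stored procedure with an OUTPUT parameter). A hedged classic-ASP sketch with made-up table, column, and connection details:

        <%
        Dim conn, rs, newId
        Set conn = Server.CreateObject("ADODB.Connection")
        conn.Open "your-connection-string-here"   ' placeholder

        ' One batch: insert the row, then return the identity generated in this scope.
        ' Real form values should go through parameterized ADODB.Command objects,
        ' not string concatenation, to avoid SQL injection.
        Set rs = conn.Execute( _
            "SET NOCOUNT ON; " & _
            "INSERT INTO Submissions (Name, Email) VALUES ('test', 'test@example.com'); " & _
            "SELECT SCOPE_IDENTITY() AS NewId;")

        newId = rs("NewId")                        ' pass this on to the external system
        rs.Close : conn.Close
        %>

    SET NOCOUNT ON keeps the INSERT's rows-affected message from becoming the first (empty) resultset, so the recordset returned to ASP is the SELECT with the new ID.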

    Read the article

  • Webcast Q&A: Los Angeles Department of Building & Safety Lowers Customer Service Costs with Oracle WebCenter

    - by Kellsey Ruppel
    This week we had the fifth webcast in our WebCenter in Action webcast series, "Los Angeles Department of Building & Safety Lowers Customer Service Costs with Oracle WebCenter", where customers Giovani Dacumos and Minh Ong from the Los Angeles Department of Building & Safety (LADBS), and Sheetal Paranjpye and Rajiv Desai from Oracle Partner 3Di, shared how Oracle WebCenter is powering LADBS' externally facing website and providing a superior self-service experience for their customers. We asked the speakers to share some of the Q&A from the session.
    Giovani Dacumos, Director of Systems, and Minh Ong, LADBS
    Q: Did you run into any issues when integrating all of the different applications?
    A: Yes. We did have issues integrating secure sign-on between the portal and other legacy applications. We used portlets and iframes to overcome those. This is a new technology for us and we are also learning as we go, so there were a lot of challenges in developing and implementing our vision.
    Q: What has been the biggest benefit your end users have seen?
    A: The biggest benefit for our end users is ease of use. We've given them a system that provides a new and improved source of information, as well as a very organized flow of transaction processing. It has made our online service very user-friendly.
    Q: Was there any resistance internally when implementing the solution? If so, how did you overcome that?
    A: There was no internal resistance during the implementation, only challenges. As mentioned earlier, this is a new technology for us. We've come across issues that needed assistance from Oracle. Working with 3Di and Oracle has helped us tremendously to find solutions to our implementation issues.
    Q: Given the performance, what do you estimate to be the top-end capacity of the system?
    A: With the current performance and architecture we have, we are able to support approximately 300-400 concurrent users. We would need more hardware to support additional user load.
    Q: What's the overview or summary of feedback from the users interacting with the site?
    A: LADBS has a wide spectrum of customers, from simple users like homeowners to large construction firms. Anything new that we offer could be a little bit challenging for some, but overall the customers liked it. They saw a huge improvement in usability.
    Q: Can you describe the impressions about the site before and after the project within LADBS?
    A: The old site used old technology, and it was hard for us to keep building on it as we got more business requirements. It made our application seem a bit complicated. It was confusing for our new customers to use, and we've improved on this with the new site. It's now easier for them to complete their transactions, and at the same time it allows us to provide more useful information.
    Sheetal Paranjpye and Rajiv Desai, 3Di
    Q: Did you run into any obstacles when implementing the solution?
    A: Yes, we did run into some obstacles. One of the key showstoppers was the issue with portlet-to-portal communication. The GIS viewer (portlet) needed information to be passed to and from Permit LA (the portal), but we were able to get everything configured and up and working quickly!
    Q: Was there a lot of custom work that needed to be done for this particular solution?
    A: We have done some customizations where workflows/task flows are involved.
    Q: What do you think were the keys to success for rolling out WebCenter?
    A: Having a service-oriented architecture and using portlets have been the key areas for rolling out Oracle WebCenter at LADBS. The Oracle WebCenter Content integration gives business users the flexibility to maintain the content themselves, which has really cut down on the reliance on IT, and employee productivity has increased as a result.
    If you missed the webcast, be sure to catch the replay to see a live demonstration of WebCenter in action: "Los Angeles Department of Building & Safety Lowers Customer Service Costs with Oracle WebCenter".

    Read the article

  • Oracle Customer Reference Forum – Apex IT – Oracle Sales Cloud

    - by Richard Lefebvre
    Apex IT, an Oracle Platinum Partner, wins Nucleus Research's ROI Award with a 724% return. Learn how you can improve your ROI with Oracle Sales and Marketing Cloud. We are pleased to invite you to a discussion with Apex IT on industry trends, why sales automation is important, the decision-making process for choosing Oracle Sales Cloud, and benefits achieved since going live. Apex IT works with clients large and small, assisting them at all stages in the process: organizing ideas and developing strategies, selecting the most appropriate package, implementing it for best results, and keeping systems optimized with long-term support. Please plan to register at least three hours prior to the event taking place in order to participate and get the dial-in information in due time. Speakers: Bryan Hinz, Vice President of Business Development, Apex IT (Speaker); Chris Haven, Senior Director Product Management, Oracle (Moderator). Organization Profile: Since 1997, Apex IT has helped public sector, corporate, and higher education clients use technology to streamline their processes and increase productivity and profitability. Based on products and best practices from Oracle, our experts provide a full range of enterprise solutions including CX/CRM and related applications that support marketing, sales, and service; HR and HR Helpdesk; and Business Intelligence. Our project approach is results-driven and our attitude is people-focused. Industry: Professional Services. Products/Services: Oracle Sales Cloud. Organization Website: http://apexit.com/ Event Description: In this informal reference call, you will have the opportunity to hear Apex IT discuss industry trends, why sales automation is important, the decision-making process for choosing Oracle Sales Cloud, and benefits achieved since going live. The call will open with a brief overview, followed by discussion, and an open question and answer session. Please allow one hour for the call. Why Oracle: Apex IT needed a mobile-enabled sales force automation tool that could promote account collaboration and integrate with Microsoft Outlook. Oracle Sales Cloud met these needs and Apex IT's requirements for improved collaborative selling, improved quality of customer engagement and information, improved business development, and improved pipeline management. Please plan to register at least three hours prior to the event taking place in order to participate and get the dial-in information in due time. After you register, your information will be forwarded through an approval process. Once your registration request has been validated against the invitation database, you will receive an email confirmation with your registration details, as long as there is availability. Please be advised that Apex IT will review the registrant list and may dismiss registrations as they see fit. Note: To access more information at the corporate site you would need an Oracle.com account. If you do not already have an account, getting one is easy and free. Click on the link and you will be prompted to create an account. After you have created your account, you will be automatically returned to the full page description of this event. Register Now!

    Read the article

  • SQLAuthority News – Speaking Sessions at TechEd India – 3 Sessions – 1 Panel Discussion

    - by pinaldave
    Microsoft Tech-Ed India 2010 is considered the major technology event of the year for various IT professionals and developers. This event will feature a comprehensive forum in which to learn, connect, explore, and evolve the current technologies we have today. I would recommend this event to you, since here you will learn about today's cutting-edge trends, thereby enhancing your work profile and getting ahead of the rest. But the most important benefit of all might be the networking opportunity that you can attain by attending the forum. You can build personal connections with various Microsoft experts and peers that will last even far beyond this event! It also feels good to let you know that I will be speaking at this year's event! So, here are the sessions that await you in this mega-forum. Session 1: True Lies of SQL Server – SQL Myth Buster. Date: April 12, 2010. Time: 11:15pm – 11:45pm. In this 30-minute demo session, I am going to briefly demonstrate a few SQL Server myths and their resolutions, backed up with demos. This demo session is a must-attend for all developers and administrators who come to the event. This is going to be a very quick yet fun session. Session 2: Master Data Services in Microsoft SQL Server 2008 R2. Date: April 12, 2010. Time: 2:30pm-3:30pm. SQL Server Master Data Services will ship with SQL Server 2008 R2 and will improve Microsoft's platform appeal. This session provides an in-depth demonstration of MDS features and highlights important usage scenarios. Master Data Services enables consistent decision making by allowing you to create, manage, and propagate changes from a single master view of your business entities. Also, the MDS master data hub is a vital component that helps ensure reporting consistency across systems and delivers faster, more accurate results across the enterprise. We will talk about establishing the basis for a centralized approach to defining, deploying, and managing master data in the enterprise. Session 3: Developing with SQL Server Spatial and Deep Dive into Spatial Indexing. Date: April 14, 2010. Time: 5:00pm-6:00pm. Microsoft SQL Server 2008 delivers new spatial data types that enable you to consume, use, and extend location-based data through spatial-enabled applications. Attend this session to learn how to use spatial functionality in the next version of SQL Server to build and optimize spatial queries. This session outlines the new geography data type to store geodetic spatial data and perform operations on it, the new geometry data type to store planar spatial data and perform operations on it, how to take advantage of new spatial indexes for high-performance queries, how to use the new spatial results tab to quickly and easily view spatial query results directly from within Management Studio, how to extend spatial data capabilities by building or integrating location-enabled applications through support for spatial standards and specifications, and much more. Panel Discussion: Harness the Power of the Web – SEO and Technical Blogging. Date: April 12, 2010. Time: 5:00pm-6:00pm. Here you will learn lots of tips and tricks about SEO and technical blogging from various industry technical blogging experts. This event will surely be one of the most important tech conventions of 2010. TechEd is going to be a very busy time for tech developers and enthusiasts, since every evening there will be a fun session to attend.
    If you are interested in any of the above topics, I suggest you attend those sessions, as you will learn a great deal about each topic discussed. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: MVP, Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLAuthority News, T SQL, Technology Tagged: TechEd, TechEdIn

    Read the article
