Search Results

Search found 12094 results on 484 pages for 'release notes'.

Page 36 of 484

  • What is the difference between release and nil

    - by HML
    I am new to the iPhone SDK. I want to know what the difference is between release and nil. Yes, it's basic, I know. But my application crashes every time I use release. If I use nil instead, it works fine. Here is the code: cellName = [cellArray objectAtIndex:5]; UITextField *txtFieldTown = (UITextField *)[cellName.contentView viewWithTag:2]; StrTown = txtFieldTown.text; [txtFieldTown release]; txtFieldTown = nil; If the release line is removed, it works fine. I know that I am not allocating txtFieldTown, so I shouldn't need to worry, but its retain count is 4, so I am trying to decrease it. Please help me. Thank you.
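    For context, a minimal sketch of the Manual Reference Counting rule behind the crash: you only send release to objects you obtained via alloc, new, copy, or retain. viewWithTag: does not transfer ownership, so releasing its result over-releases the text field (the retain count of 4 belongs to the view hierarchy, not to you), while setting the pointer to nil only clears your local reference. The names cellName and StrTown below are taken from the question and are otherwise assumed:

        // You did not alloc/new/copy/retain txtFieldTown, so do not release it.
        UITextField *txtFieldTown = (UITextField *)[cellName.contentView viewWithTag:2];
        StrTown = txtFieldTown.text;   // just read the value you need
        txtFieldTown = nil;            // optional: clears the local pointer, nothing more

        // Release only what you own, i.e. what you created yourself:
        NSString *owned = [[NSString alloc] initWithFormat:@"%@", StrTown];
        // ... use owned ...
        [owned release];               // exactly one release per alloc/new/copy/retain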

    Read the article

  • Proper Memory Management for Objective-C Method

    - by Justin
    Hi, I'm programming an iPhone app and I had a question about memory management in one of my methods. I'm still a little new to managing memory manually, so I'm sorry if this question seems elementary. Below is a method designed to let a number pad place each button's value in a label based on its tag; this way I don't need to make a method for each button. The method works fine; I'm just wondering if I'm responsible for releasing any of the variables I make in the function. The application crashes if I try to release any of the variables, so I'm a little confused about my responsibility regarding memory. Here's the method (FYI, the variable firstValue is my label; it's the only variable not declared in the method): -(IBAction)inputNumbersFromButtons:(id)sender { UIButton *placeHolderButton = [[UIButton alloc] init]; placeHolderButton = sender; NSString *placeHolderString = [[NSString alloc] init]; placeHolderString = [placeHolderString stringByAppendingString:firstValue.text]; NSString *addThisNumber = [[NSString alloc] init]; int i = placeHolderButton.tag; addThisNumber = [NSString stringWithFormat:@"%i", i]; NSString *newLabelText = [[NSString alloc] init]; newLabelText = [placeHolderString stringByAppendingString:addThisNumber]; [firstValue setText:newLabelText]; //[placeHolderButton release]; //[placeHolderString release]; //[addThisNumber release]; //[newLabelText release]; } The application works fine with those last four lines commented out, but it seems to me like I should be releasing these variables here. If I'm wrong about that, I'd welcome a quick explanation of when it's necessary to release variables declared in functions and when it's not. Thanks.
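    For reference, a hedged sketch of the same method without the redundant allocations (still Manual Reference Counting; firstValue is assumed to be the label from the question). Each [[... alloc] init] in the original is leaked as soon as its pointer is reassigned, and the objects that actually end up in the variables are autoreleased or owned by someone else, which is why the commented-out releases crash:

        - (IBAction)inputNumbersFromButtons:(id)sender {
            // sender is the tapped button; we do not own it, so no release.
            UIButton *placeHolderButton = (UIButton *)sender;

            // stringWithFormat: and stringByAppendingString: both return
            // autoreleased strings, so nothing here needs an explicit release.
            NSString *addThisNumber = [NSString stringWithFormat:@"%i",
                                       (int)placeHolderButton.tag];
            NSString *newLabelText = [firstValue.text
                                      stringByAppendingString:addThisNumber];

            [firstValue setText:newLabelText];
        }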

    Read the article

  • ATG Live Webcast Event - EBS 12 OAF Rich UI Enhancements

    - by Bill Sawyer
    The E-Business Suite Applications Technology Group (ATG) participates in several conferences a year, including Oracle OpenWorld in San Francisco and OAUG/Collaborate. We announce new releases, roadmaps, updates, and other news at these events. These events are exciting, drawing thousands of attendees, but it's clear that only a fraction of our EBS users are able to participate. We touch upon many of the same announcements here on this blog, but a blog article is necessarily different than an hour-long conference session. We're very interested in offering more in-depth technical content and the chance to interact directly with senior ATG Development staff. New ATG Live Webcast series -- free of charge: As part of that initiative, I'm very pleased to announce that we're launching a new series of free ATG Live Webcasts jointly with Oracle University. Our goal is to provide solid, authoritative coverage of some of the latest ATG technologies, broadcasting live from our development labs to you. Our first event is titled: The Latest E-Business Suite R12.x OA Framework Rich User Interface Enhancements. This live one-hour webcast will offer a comprehensive review of the latest user interface enhancements and updates to OA Framework in EBS 12. Developers will get a detailed look at new features designed to enhance usability, offer more capabilities for personalization and extensions, and support the development and use of dashboards and web services. Topics will include new rich user interface (UI) capabilities such as:

    Read the article

  • BPEL 11.1.1.2 Certified for Prebuilt E-Business Suite 12.1.3 SOA Integrations

    - by Steven Chan
    A new certification was released simultaneously with the E-Business Suite 12.1.3 Maintenance Pack late last year: the use of BPEL 11g Version 11.1.1.2 with E-Business Suite 12.1.3. There are two major options for SOA-related integrations for the E-Business Suite: custom integrations using the Oracle Application Server (SOA) Adapter for Oracle Applications, and prebuilt SOA integrations for E-Business Suite using BPEL Process Manager. For more background about these two options, please see this article: BPEL 10.1.3.5 Certified for Prebuilt E-Business Suite 12 SOA Integrations

    Read the article

  • ATG Live Webcast Feb. 24th: Using the EBS 12 SOA Adapter

    - by Bill Sawyer
    Our next ATG Live Webcast is now open for registration. The event is titled: E-Business Suite R12.x SOA Using the E-Business Suite Adapter. This live one-hour webcast will offer a review of the Service Oriented Architecture (SOA) capabilities within E-Business Suite R12, focusing on the E-Business Suite Adapter. While primarily focused on integrators and developers, understanding SOA capabilities is important for all E-Business Suite technologists and superusers. ATG Live Webcast Logistics: The one-hour event will be webcast live with dial-in access for Q&A with the Applications Technology Group (ATG) Development experts presenting the event. The basic information for the event is as follows: E-Business Suite R12.x SOA Using the E-Business Suite Adapter. Date: Thursday, February 24, 2011. Time: 8:00 AM - 9:00 AM Pacific Standard Time. Presenter: Neeraj Chauhan, Product Manager, ATG Development. NOTE: When you register for the event, the confirmation will show the event starting at 7:30 AM Pacific Standard Time. This is to allow you time to connect to the conference call and web conference. The presentation will start at 8:00 AM Pacific Standard Time.

    Read the article

  • Offloading (Some) EBS 12 Reporting to Active Data Guard Instances

    - by Steven Chan
    For most Oracle Database users, Oracle Active Data Guard allows them to: create a physical standby database for business continuity and disaster recovery, and offload reporting from the production database to the read-only physical standby database. E-Business Suite customers have been able to use Active Data Guard to create physical standby databases for their EBS environments since the feature was introduced with the 11g Database. EBS sysadmins can use the generic Active Data Guard documentation to take advantage of the Active Data Guard standby database capabilities. I am pleased to announce that it is now possible to offload a subset of ReportWriter-based reports (but not all of them) from a production EBS environment to an Active Data Guard physical standby database. But before I go into the details of this newly-certified configuration, it's necessary to understand some details about what happens whenever someone attempts to access the E-Business Suite.

    Read the article

  • Oracle Access Manager 10gR3 Certified with E-Business Suite

    - by Keith M. Swartz
    Oracle Access Manager 10gR3 (10.1.4.3) is now certified for use with E-Business Suite Releases 11.5.10 and 12.1, using the new component, Oracle E-Business Suite AccessGate. For information on how to obtain, install, and configure this new component, see: Integrating Oracle E-Business Suite with Oracle Access Manager using Oracle E-Business Suite AccessGate (Note 975182.1). About Oracle Access Manager: Oracle Access Manager is Oracle's next-generation identity and access management platform, and is a key component in Oracle's Fusion Middleware Identity Management solution. It provides a set of authentication and authorization features, including support for single sign-on authentication, and integration with other identity management offerings such as Oracle Identity Federation and Oracle Adaptive Access Manager.

    Read the article

  • Print Any Document Type with AutoVue Document Print Services

    - by [email protected]
    The newly released AutoVue Document Print Services allow development organizations to automate and process high volume printing operations, of both business and technical document types, within their broader enterprise applications. For many organizations, their printing processes are challenged by the fact that they can only print a small subset of the documents required by their enterprise users. By integrating AutoVue Document Print Services, and deploying them in conjunction with their existing print server solutions, organizations can address that challenge and automate the printing of virtually any document type required in any business process, greatly extending the value of their print server solutions, and improving business processes and workforce productivity. For further details, check out the AutoVue Document Print Services datasheet.

    Read the article

  • Dependent on CVS tagging for automated builds

    - by OMG Ponies
    My current work relies on using tags in CVS for an automated build process (ANT currently) to build for the respective environments (development, QA, production). From our research, neither Git nor Subversion supports tagging in the same manner (please correct me if I'm wrong). So if we use Subversion or Git, how would ANT or Maven know what to pick up for the respective build? Example: for a webapp, when viewing our repository, say for the web.xml file, the history would look like: web.xml v1 ... web.xml v1.2.3 Tag: Prod web.xml v1.2.4 web.xml v1.2.5 Tag: QA web.xml v1.2.6 web.xml v1.2.7 Head. The ANT build scripts are run as CRON jobs, at different times and intervals for different environments. The environment build is based on the repository checkout, based on the tag. Development continues, and eventually the respective tags are moved: web.xml v1 ... web.xml v1.2.3 web.xml v1.2.4 web.xml v1.2.5 web.xml v1.2.6 Tag: Prod web.xml v1.2.7 Tag: QA web.xml v1.2.8 Head

    Read the article

  • Identifying cause of Software opening hundreds of thousands of handles

    - by user428370
    I have a user running the latest public revision of IBM Notes (8.5.4) on a Windows 7 Pro OA installation. Under his profile, and only his, the software opens hundreds of thousands of handles when run, all related to token queries. The problem affects only this specific user, on his computer, using only this software. I have narrowed the issue down to something related to his profile, but am unsure what to look at next. Any advice would be appreciated.

    Read the article

  • Introduction to LinqPad Driver for StreamInsight 2.1

    - by Roman Schindlauer
    We are announcing the availability of the LinqPad driver for StreamInsight 2.1. The purpose of this blog post is to offer a quick introduction to the new features that we added to the StreamInsight LinqPad driver. We'll show you how to connect to a remote server, how to inspect the entities present on that server, how to compose on top of them, and how to manage their lifetime.

    Installing the driver: Info on how to install the driver can be found in an earlier blog post here.

    Establishing connections: As you click on the "Add Connection" link in the left pane you will notice that it's now possible to build the data context automatically. The new driver appears as an option in the upper list, and if you pick it you will open a connection dialog that lets you connect to a remote StreamInsight server. The connection dialog lets you specify the address of the remote server. You will notice that it's possible to pick up the binding information from the configuration file of the LinqPad application (which is normally in the same folder as LinqPad.exe and is called LinqPad.exe.config). In order for the context to be generated you need to pick an application from the server. The control is editable, hence you can create a new application if you don't want to make changes to an existing application. If you choose a new application name you will be prompted for confirmation before this gets created. Once you click OK the connection is created and you can start issuing queries against the remote server. If there's any connectivity error the connection is marked with a red X and you can see the error message informing you what went wrong (i.e., the remote server could not be reached, etc.).

    The context for remote servers: Let's take a look at what happens after we are connected successfully. Every LinqPad query runs inside a context – think of it as a class that wraps all the code that you're writing. If you're connecting to a live server the context will contain the following: the application object itself, and all entities present in this application (sources, sinks, subjects and processes). The picture below shows a snapshot of the left pane of LinqPad after a successful connection. Every entity on the server has a different icon which will allow users to figure out its purpose. You will also notice that some entities have a string in parentheses following the name. It should be interpreted as such: the first name is the name of the property of the context class and the second name is the name of the entity as it exists on the server. Not all valid entity names are valid identifier names, so in cases where we had to make a transformation you see both. Note also that as you hover over the entities you get IntelliSense with their types – more on that later.

    Remoting is not supported: As you play with the entities exposed by the context you will notice that you can't read and write directly to/from them. If, for instance, you try to dump the content of an entity you will get an error message telling you that in the current version remoting is not supported. This is because the entity lives on the remote server and dumping its content means reading the events produced by this entity into the local process. ObservableSource.Dump(); will yield the following error: Reading from a remote 'System.Reactive.Linq.IQbservable`1[System.Int32]' is not supported. Use the 'Microsoft.ComplexEventProcessing.Linq.RemoteProvider.Bind' method to read from the source using a remote observer. This basically tells you that you can call the Bind() method to direct the output of this source to a sink that has to be defined on the remote machine as well. You can't bring the results to the LinqPad window unless you write code specifically for that.

    Compose queries: You may ask – what's the purpose of all that? After all, the same information is present in the EventFlowDebugger, so why bother showing it in LinqPad? First of all, what gets exposed in LinqPad is not what you see in the debugger. In LinqPad we have a property on the context class for every entity that lives on the server. Because LinqPad offers IntelliSense we in fact have much more information about the entity, and more importantly we can compose with that entity very easily. For example, let's say that this code creates an entity: using (var server = Server.Connect(...)) { var a = server.CreateApplication("WhiteFish"); var src = a .DefineObservable<int>(() => Observable.Range(0, 3)) .Deploy("ObservableSource"); If later we want to compose with the source we have to fetch it, and then we can bind something to it: a.GetObservable<int>("ObservableSource").Bind(... This means that we had to know a bunch of things about this entity: that it's a source, that it's an observable, that it produces a result with payload Int32, and that it's named "ObservableSource". Only the second and last bits of information are present in the debugger, by the way. As you type in the query window you see that all the entities are present, you get IntelliSense support for them, and it's much easier to make sense of what's available. Let's look at a scenario where composition is plausible. With the new programming model it's possible to create "cold" sources that are parameterized. There was a way to accomplish that even in the previous version by passing parameters to the adapters, but this time it's much more elegant because the expression declares what parameters are required. Say that we hover the mouse over the ThrottledSource source – we will see that its type is Func<int, int, IQbservable<int>>. This in effect means that we need to pass two int parameters before we can get a source that produces events, and the type for those events is int. In the particular case of my example I had the source produce a range of integers and the two parameters were the start and end of the range. So we see how a developer can create a source that is not running yet. Then someone else (e.g. an administrator) can pass whatever parameters are appropriate and run the process.

    Proxy Types: Here's an interesting scenario – what if someone created a source on a server but forgot to tell you what type they used? Worse yet, they might have used an anonymous type, and even though they can refer to it by name, you can't figure out how to use that type. Let's walk through an example that shows how you can compose against types you don't have the definition of. This is how we can create a source that returns an anonymous type: Application.DefineObservable(() => Observable.Range(1, 10).Select(i => new { I = i })).Deploy("O1"); Now if we refresh the connection we can see the new source named O1 appear in the list. But what's more important is that we now have a type to work with, so we can compose a query that refers to the anonymous type. var threshold = new StreamInsightDynamicDriver.TypeProxies.AnonymousType1_0<int>(5); var filter = from i in O1 where i > threshold select i; filter.Deploy("O2"); You will notice that the anonymous type defined with this statement: new { I = i } can now be manipulated by a client that does not have access to it, because the LinqPad driver has generated another type in its stead, named StreamInsightDynamicDriver.TypeProxies.AnonymousType1_0. This type has all the properties and fields of the type defined on the server, except in this case we can instantiate values and use it to compose more queries. It is worth noting that the same thing works for types that are not anonymous – the test is whether the LinqPad driver can resolve the type or not. If it's not possible, then a new type will be generated that approximates the type that exists on the server.

    Control metadata: In addition to composing processes on top of the existing entities we can do other useful things. We can delete them – nothing new here, as we simply access the entities through the Entities collection of the application class. Here is where having their real name in parentheses comes in handy. There's another way to find out what's behind a property – dump its expression. The first line in the output tells us the name of the entity used to build this property in the context.

    Runtime information: So let's create a process to see what happens. We can bind a source to a sink and run the resulting process. If you right-click on the connection you can refresh it and see the process present in the list of entities. Then you can drag the process to the query window and see that you have access to the process object in the Processes collection of the application. You can then manipulate the process (delete it, read its diagnostic view, etc.). Regards, The StreamInsight Team

    Read the article

  • FIX adapter for StreamInsight

    - by Roman Schindlauer
    Over the last couple of months, Rapid Addition, a leading FIX and FAST solutions provider for the financial services industry, has been working closely with the StreamInsight team to enable StreamInsight Complex Event Processing queries to receive input feeds from Rapid Addition's FIX engine and to send result events back into FIX. Earlier today, Toby Corballis from Rapid Addition blogged about these capabilities here on HedgeHogs. We are very excited to demonstrate these capabilities at the SIFMA conference in New York. The session will take place tomorrow, Tuesday, 11am – 12noon, at the Hilton Hotel New York, 1335 Avenue of the Americas, East Suite, 4th floor. Torsten Grabs from the StreamInsight team will join the Rapid Addition and local Microsoft teams for the session. If you are interested in attending the session, please register at http://bit.ly/c0bbLL. We are looking forward to meeting you tomorrow at SIFMA! Best regards, The StreamInsight Team

    Read the article

  • Upgrade problem - "dependency problems prevent configuration of libnih-dbus1"

    - by raycho
    I have a problem with upgrading. When I run sudo dpkg --configure -a, this is what happens: dependency problems prevent configuration of libnih-dbus1: libnih-dbus1 depends on libnih1 (= 1.0.3-4ubuntu9); however: Version of libnih1 on system is 1.0.3-4ubuntu2. libnih-dbus1 depends on libc6 (>= 2.3.4); however: Package libc6 is not installed. dpkg: error processing libnih-dbus1 (--configure): dependency problems - leaving unconfigured Errors were encountered while processing: libnih-dbus1 Please help

    Read the article

  • Development environment to manage multiple Oracle databases

    - by jkohlhepp
    I am in an enterprise environment where we have applications that need to run against multiple Oracle databases. Developers may need to manage multiple vintages of these databases to support different test data or to diagnose bugs against different versions of the code. Right now, we have a limited set of test environments set up on "real" Oracle servers within the data center. We juggle these among development and QA groups, and a lot of conflicts and inefficiencies arise because of it. I am taking a look at Oracle Express Edition, which would allow me to spin up a local Oracle database. This is similar to the workflow I most often see with SQL Server: devs work on their local machine until they are ready to integrate, and then they push their DB changes to integration / QA environments. However, from what I read it seems that Oracle XE only supports one database instance at a time. So if I have an application that utilizes two different databases, I can't have both of them running on my local machine. Is that correct? Do the Oracle Standard or Personal editions get around this limitation? If I had one of those installed locally, how difficult would it be to get multiple databases working on the same development machine? How do dev shops handle developing against Oracle when they need to use several different Oracle instances for their applications?

    Read the article

  • Is it possible to skip an LTS upgrade?

    - by Jo-Erlend Schinstad
    If you want to upgrade from 10.10 to 12.04, then you'll need to follow the upgrade path: 10.10 -> 11.04 -> 11.10 -> 12.04. However, with LTS versions you can upgrade directly, so you can go straight from 10.04 LTS to 12.04 LTS. But now LTS versions are supported for five years while there's still a new LTS every two years, which means you can choose to skip an LTS. So the question is: will I be able to upgrade from 12.04 LTS to 16.04 LTS directly, or will I then have to follow the LTS upgrade path 12.04 -> 14.04 -> 16.04?

    Read the article

  • How do you manage frequent software releases to multiple clients?

    - by meeech
    Hi, we have a cross-platform middleware product which we typically end up customizing and bug-fixing on a per-client basis, in some cases providing updates as often as once or twice per week. We have a lot of trouble efficiently managing and releasing the updates to our clients. I've done some digging, but I can't find anything that specifically addresses this problem. Can anyone share their experiences - how do you deal with this scenario, or do you know of a good software delivery CMS? Thanks

    Read the article

  • Reminder: ATG Live Webcast Feb. 24th: Using the R12 EBS Adapter

    - by Bill Sawyer
    Reminder: Our next ATG Live Webcast is happening on 24-Feb. The event is titled: E-Business Suite R12.x SOA Using the E-Business Suite Adapter. This live one-hour webcast will offer a review of the Service Oriented Architecture (SOA) capabilities within E-Business Suite R12, focusing on the E-Business Suite Adapter. While primarily focused on integrators and developers, understanding SOA capabilities is important for all E-Business Suite technologists and superusers.

    Read the article

  • How can I use Outlook 2007 to connect to mail service using active sync protocol

    - by Dan
    Has anyone tried connecting Outlook (I am running v2007 on Windows 7) to a mail service using the MS Exchange ActiveSync protocol? If so, how did you do it? Wouldn't this solution eliminate the need for all the hacks/APIs needed to connect Outlook to 'X' mail (Gmail, Notes, etc.)? I know it is intended for mobile devices, but to me it looks like it is becoming the latest 'de facto' protocol for email/calendar/contact syncing, due to the iPhone's support for it. Thanks!

    Read the article

  • What is the Policy for Updating grub-pc in LTS?

    - by nutznboltz
    I submitted an SRU request for a small patch to grub-pc for 10.04 LTS that fixes an issue which has left many people with unbootable servers, but nothing has been done for some time. The patch went through d-i into Maverick over a year ago and only modifies a small amount of a single *.c file. I updated the ticket with a debdiff of the patch, rebuilt the current grub-pc package plus the patch in a PPA, changed the ticket description to the SRU request format, and subscribed the "sru" and "sponsors" teams, but then nothing happened. Is there some policy against updating grub-pc in LTS that I don't know about? Thanks.

    Read the article

  • What's up with all the updates? [closed]

    - by Bob Babb
    I use Ubuntu exclusively for my job, especially for the fact that everything works and I get the most out of my processor and memory, but you are killing me with updates! I just lost a very good opportunity from a client that installed Ubuntu but got tired of all the updates. I really can't argue the fact. In a matter of a day I had two software updates. Quote from customer: "It's sad that I come in at 6:00 in the morning to install updates from a LTS version, and then before I leave at the end of the day I have 19 new updates to install. At least Microsoft bundles them in controllable groups." Sadly I have to agree, guys you have to do something about this. Please!

    Read the article

  • Where can I find a list of local japanese game publishers

    - by Erik
    What would be a good starting point for locating small or medium-sized game publishers in Japan? We have a US/EU released game that we believe will fit the Japanese market well, and are looking for companies that we could contact for possible co-publishing. EDIT: So far, I've found http://www.gamebusiness.jp/directory/category.php?id=10002 and, from one answer, http://www.gamedevmap.com/index.php?query=Japan&Submit=Search. Starting a bounty; I need as many publishers as I can get.

    Read the article

  • Solution to tack things up on top or to the side of a monitor

    - by Bela
    I'm trying to find some sort of product that would go on either the top or the side of an LCD monitor and give me space to tape, push-pin, or Post-it-note things for myself. For random notes and things I want to keep track of, having them on the top or side of my monitor would keep the space on my desk itself clear, and they would be closer to my field of vision. Does something like this exist? Do I need to rig up something myself?

    Read the article

  • 14.04 LTS Unity no longer boots after last 94 MB update

    - by Harryg123
    I am running 14.04 LTS, Unity, on an HP Pavilion 15 (4 GB RAM, 750 GB HDD, i5) with a built-in AMD 8600M graphics card. I have disabled the dash and all Ubuntu spyware. I have been faithfully loading all updates as they appear. This morning it asked for a 94 MB update (bringing the kernel to .27, I think). Now I can boot and get to the login screen, but it freezes after that. The keyboard doesn't work at that point, but the mouse does. I booted into recovery mode and tried to run in generic graphics mode -- the system again froze. I also pressed [Esc] during boot, but saw nothing strange; then the text disappeared and was replaced by the login screen. I am not a hobbyist; this is a production machine and I have a lot of work to do today. Having a standard software update render my machine completely useless... sigh. Perhaps the simplest thing to do would be to revert to the previous configuration. How do I do that? I can boot into recovery mode, but I have no idea how to proceed. TIA for all help. -Harry

    Read the article

  • How do you update live web sites with code changes?

    - by Aaron Anodide
    I know this is a very basic question. If someone could humor me and tell me how they would handle this, I'd be grateful. I decided to post this because I am about to install SyncToy to remedy the issue below, and I feel a bit unprofessional using a "Toy", but I can't think of a better way. Many times when I am in this situation, I find I am missing some painfully obvious way to do things - this comes from being the only developer in the company. ASP.NET web application developed on my computer at work. Solution has 2 projects: Website (files) and WebsiteLib (C#/dll). Using a Git repository. Deployed on a GoGrid 2008R2 web server. Deployment: Make code changes. Push to Git. Remote desktop to server. Pull from Git. Overwrite the live files by dragging/dropping with Windows Explorer. In step 5 I delete all the files from the website root... this can't be a good thing to do. That's why I am about to install SyncToy... UPDATE: THANKS for all the useful responses. I can't pick which one to mark as the answer - it looks like I have several useful suggestions: Web Project = whole site packaged into a single DLL - the downside for me is I can't push simple updates; being a lone developer in a company of 50, this remains something that is simpler at times. Pulling straight from SCM into the web root of the site - I originally didn't do this out of fear that my SCM hidden directory might end up being exposed, but the answers here helped me get over that (although I still don't like having one more thing to worry about forgetting to make sure is still true over time). Using a web farm, and systematically deploying to nodes - this is the ideal solution for zero downtime, which is actually something I care about since the site is essentially a real-time revenue source for my company - I might have a hard time convincing them to double the cost of the servers though. Finally, the reinforcement of the basic principle that there needs to be a single-click deployment for the site OR ELSE THERE IS SOMETHING WRONG is probably the most useful thing I got out of the answers. UPDATE 2: I thought I'd come back to this and update with the actual solution that's been in place for many months now and is working perfectly (for my single web server solution). The process I use is: Make code changes. Push to Git. Remote desktop to server. Pull from Git. Run the following batch script: cd C:\Users\Administrator %systemroot%\system32\inetsrv\appcmd.exe stop site "/site.name:Default Web Site" robocopy Documents\code\da\1\work\Tree\LendingTreeWebSite1 c:\inetpub\wwwroot /E /XF connectionsconfig Web.config %systemroot%\system32\inetsrv\appcmd.exe start site "/site.name:Default Web Site" As you can see, this brings the site down, uses robocopy to intelligently copy only the files that have changed, then brings the site back up. It typically runs in less than 2 seconds. Since peak traffic on this site is about 2 requests per second, missing about 4 requests per site update is acceptable. Since I've gotten more proficient with Git, I've found that the first four steps above being a "manual process" is also acceptable, although I'm sure I could roll the whole thing into a single click if I wanted to. The documentation for AppCmd.exe is here. The documentation for Robocopy is here.

    Read the article
