Search Results

Search found 4461 results on 179 pages for 'availability groups'.


  • Move Data into the grid for scalable, predictable response times

    - by JuergenKress
    CloudTran is pleased to introduce the availability of the CloudTran Transaction and Persistence Manager for creating scalable, reliable data services on the Oracle Coherence In-Memory Data Grid (IMDG). Use of IMDG architectures has been key to handling today’s web-scale loads because it eliminates database latency by storing important and frequently accessed data in memory instead of on disk. The CloudTran product lets developers easily use an IMDG for full ACID-compliant transactions without having to be concerned about the location or spread of data. The system has its own implementation of fast, scalable distributed transactions that does NOT depend on XA protocols but still guarantees all ACID properties. Plus, CloudTran asynchronously replicates data going into the IMDG to back-end datastores and back-up data centers, again ensuring ACID properties. CloudTran can be accessed through the Java Persistence API (JPA via TopLink Grid) and now through a new Low-Level API, or LLAPI. This is ideal for use in SOA applications that need data reliability, high availability, performance, and scalability. Still in its limited beta release, the LLAPI gives developers the ability to use the standard put/remove logic available in Coherence and then wrap that logic with simple Spring annotations or XML+AspectJ to start transactions. An important feature of the LLAPI is the ability to join transactions. This is a common requirement for SOA applications that need to reduce network traffic by aggregating data into single cache entries and then doing SOA service processing in the node holding the data, which results in the need to orchestrate transaction processing across multiple service calls. CloudTran has the capability to handle these “multi-client” transactions at speed with no loss in ACID properties. Developing software around an IMDG like Oracle Coherence is an important choice for today’s web-scale applications and services, but it introduces new architectural considerations to maintain scalability in light of increased network loads and data movement. Without CloudTran, developers face an incredibly difficult task in ensuring data reliability, availability, performance, and scalability when working with an IMDG. Working with highly distributed data that is entirely volatile while stored in memory presents numerous edge cases where failures can result in data loss. The CloudTran product takes care of all of this, leaving developers with the confidence and peace of mind that all data is processed correctly. For those interested in evaluating the CloudTran product and IMDGs, take a look at this link for more information: http://www.CloudTran.com/downloadAPI.php, or send your questions to [email protected]. SOA & BPM Partner Community For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; to register, please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: CloudTran,data grid,SOA Community,Oracle SOA,Oracle BPM,BPM,Community,OPN,Jürgen Kress
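
    To make the put/remove-plus-annotation idea concrete, here is a minimal Java sketch. The Coherence calls (CacheFactory.getCache, put, remove) are the standard Coherence API; the @CloudTranTransactional annotation, the cache names, and the service method are hypothetical stand-ins, since the post does not show the actual LLAPI surface.

        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;

        // Hypothetical placeholder for the real LLAPI Spring annotation.
        @interface CloudTranTransactional {}

        public class OrderService {

            // The annotation marks the method as one transaction, per the post's description.
            @CloudTranTransactional
            public void shipOrder(String orderId, Object order) {
                NamedCache pending = CacheFactory.getCache("pending-orders"); // assumed cache names
                NamedCache shipped = CacheFactory.getCache("shipped-orders");
                // Standard Coherence put/remove logic; the transaction manager
                // would make this pair of operations atomic across both caches.
                pending.remove(orderId);
                shipped.put(orderId, order);
            }
        }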

    Read the article

  • Data management in unexpected places

    - by Ashok_Ora
    When you think of network switches, routers, firewall appliances, etc., it may not be obvious that at the heart of these kinds of solutions is an engine that can manage huge amounts of data at very high throughput with low latencies and high availability. Consider a network router that is processing tens (or hundreds) of thousands of network packets per second. So what really happens inside a router? Packets are streaming in at the rate of tens of thousands per second. Each packet has multiple attributes, for example, a destination, associated SLAs, etc. For each packet, the router has to determine the address of the next “hop” to the destination; it has to determine how to prioritize this packet. If it’s a high-priority packet, then it has to be sent on its way before lower-priority packets. As a consequence of prioritizing high-priority packets, lower-priority data packets may need to be temporarily stored (held back), but addressed fairly. If there are security or privacy requirements associated with the data packet, those have to be enforced. You probably need to keep track of statistics related to the packets processed (someone’s sure to ask). You have to do all this (and more) while preserving high availability, i.e. if one of the processors in the router goes down, you have to have a way to continue processing without interruption (the customer won’t be happy with a “choppy” VoIP conversation, right?). And all this has to be achieved without ANY intervention from a human operator – the router is most likely in a remote location – it must JUST CONTINUE TO WORK CORRECTLY, even when bad things happen. How is this implemented? As soon as a packet arrives, it is interpreted by the receiving software. The software decodes the packet headers in order to determine the destination, the kind of packet (e.g. voice vs. data), the SLAs associated with the “owner” of the packet, etc. It looks up the internal database of “rules” for how to process this packet and handles the packet accordingly. The software might choose to hold on to the packet safely for some period of time, if it’s a low-priority packet. Ah – this sounds very much like a database problem. For each packet, you have to, minimally:
    · Look up the most efficient next “hop” towards the destination. The “most efficient” next hop can change, depending on latency, availability, etc.
    · Look up the SLA and determine the priority of this packet (e.g. voice calls get priority over data FTP).
    · Look up security information associated with this data packet. It may be necessary to retrieve the context for this network packet, since a network packet is a small “slice” of a session. The context for the “header” packet needs to be stored in the router in order to make this work.
    · If the priority of the packet is low, then “store” the packet temporarily in the router until it is time to forward the packet to the next hop.
    · Update various statistics about the packet.
    In most cases, you have to do all this in the context of a single transaction. For example, you want to look up the forwarding address and perform the “send” in a single transaction so that the forwarding address doesn’t change while you’re sending the packet. So, how do you do all this? Berkeley DB is a proven, reliable, high-performance, highly available embeddable database, designed for exactly these kinds of usage scenarios.
    First and foremost, Berkeley DB (or BDB for short) is very, very fast. It can process tens or hundreds of thousands of transactions per second. It can be used as a pure in-memory database, or as a disk-persistent database. BDB provides high availability – if one board in the router fails, the system can automatically fail over to another board – no manual intervention required. BDB is self-administering – there’s no need for manual intervention in order to maintain a BDB application. No need to send a technician to a remote site in the middle of nowhere on a freezing winter day to perform maintenance operations. BDB has been used in over 200 million deployments worldwide over the past two decades for mission-critical applications such as the one described here. You have a choice of spending valuable resources to implement similar functionality, or you could simply embed BDB in your application and off you go! I know what I’d do – choose BDB, so I can focus on my business problem. What will you do?
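
    As a flavor of what “look up the forwarding address and send in a single transaction” looks like in code, here is a minimal sketch against the Berkeley DB Java Edition API. The article shows no code, so treat this as an illustration: the environment path, database name, and key layout are assumptions, and an embedded router would more likely use the C edition with the same concepts.

        import com.sleepycat.je.*;
        import java.io.File;

        public class RouteLookup {
            public static void main(String[] args) throws Exception {
                // Open a transactional environment and a "routes" database (names assumed).
                EnvironmentConfig envCfg = new EnvironmentConfig();
                envCfg.setAllowCreate(true);
                envCfg.setTransactional(true);
                Environment env = new Environment(new File("/tmp/routerdb"), envCfg);

                DatabaseConfig dbCfg = new DatabaseConfig();
                dbCfg.setAllowCreate(true);
                dbCfg.setTransactional(true);
                Database routes = env.openDatabase(null, "routes", dbCfg);

                // Read the next hop and act on it inside one transaction, so the
                // forwarding address cannot change while the packet is being sent.
                Transaction txn = env.beginTransaction(null, null);
                try {
                    DatabaseEntry key = new DatabaseEntry("10.1.2.0/24".getBytes("UTF-8"));
                    DatabaseEntry nextHop = new DatabaseEntry();
                    if (routes.get(txn, key, nextHop, LockMode.RMW) == OperationStatus.SUCCESS) {
                        // forward the packet to the address in nextHop.getData() here
                    }
                    txn.commit();
                } catch (Exception e) {
                    txn.abort();
                    throw e;
                } finally {
                    routes.close();
                    env.close();
                }
            }
        }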

    Read the article

  • ORACLE UNIVERSITY

    - by mseika
    Expert Seminar in Dubai: How to choose your High-Availability solutions with Piet de Visser. Oracle University's Expert Seminars are delivered by the best Oracle gurus in the industry from all over the world. These unique and informative seminars are designed to provide you with expert insight in your area of interest. Piet de Visser is delivering the Expert Seminar ‘How to choose your High-Availability solutions’ on 8 November in Dubai. You can find more information here. Please note: your OPN discount is applied to the standard price shown on the website. For assistance with bookings contact Oracle University: Email: [email protected] Telephone: +971 4 39 09 050

    Read the article

  • Move Data into the Grid for Scalable, Predictable Response Times

    - by JuergenKress
    CloudTran is pleased to introduce the availability of the CloudTran Transaction and Persistence Manager for creating scalable, reliable data services on the Oracle Coherence In-Memory Data Grid (IMDG). Use of IMDG architectures has been key to handling today’s web-scale loads because it eliminates database latency by storing important and frequently accessed data in memory instead of on disk. The CloudTran product lets developers easily use an IMDG for full ACID-compliant transactions without having to be concerned about the location or spread of data. The system has its own implementation of fast, scalable distributed transactions that does NOT depend on XA protocols but still guarantees all ACID properties. Plus, CloudTran asynchronously replicates data going into the IMDG to back-end datastores and back-up data centers, again ensuring ACID properties. CloudTran can be accessed through the Java Persistence API (JPA via TopLink Grid) and now through a new Low-Level API, or LLAPI. This is ideal for use in SOA applications that need data reliability, high availability, performance, and scalability. Still in limited beta release, the LLAPI gives developers the ability to use the standard put/remove logic available in Coherence and then wrap that logic with simple Spring annotations or XML+AspectJ to start transactions. An important feature of the LLAPI is the ability to join transactions. This is a common requirement for SOA applications that need to reduce network traffic by aggregating data into single cache entries and then doing SOA service processing in the node holding the data, which results in the need to orchestrate transaction processing across multiple service calls. CloudTran has the capability to handle these “multi-client” transactions at speed with no loss in ACID properties. Developing software around an IMDG like Oracle Coherence is an important choice for today’s web-scale applications and services, but it introduces new architectural considerations to maintain scalability in light of increased network loads and data movement. Without CloudTran, developers face an incredibly difficult task in ensuring data reliability, availability, performance, and scalability when working with an IMDG. Working with highly distributed data that is entirely volatile while stored in memory presents numerous edge cases where failures can result in data loss. The CloudTran product takes care of all of this, leaving developers with the confidence and peace of mind that all data is processed correctly. For those interested in evaluating the CloudTran product and IMDGs, take a look at this link for more information: http://www.CloudTran.com/downloadAPI.php, or send your questions to [email protected]. WebLogic Partner Community For regular information, become a member of the WebLogic Partner Community; please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: Coherence,cloudtran,cache,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • Oracle OpenWorld Update -- Highly Available WebLogic Messaging Architectures: Sharing a Customer Experience with Comcast

    - by Ruma Sanyal
    This session will describe Comcast’s hands-on experience using WebLogic JMS as their high-performance enterprise messaging system, including high availability and disaster recovery capabilities, as Comcast rolls out a cross-site active-active message bus. In the session, we will cover the following:
    · Key capabilities in WebLogic JMS that enabled Comcast to design such an architecture
    · Details of the architecture put in place
    · Details about the application design needed to make all of this successful
    · Failover and failback processes
    The results from this new architecture are higher availability, better performance, more flexibility, and reduced costs through better utilization of hardware and improved manageability. For more information about this and other WebLogic sessions, review the Oracle WebLogic Focus On document here. Details: Tuesday, Oct 2, 5-6pm, Moscone South Room 306
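
    The session abstract shows no code, but for orientation, here is a minimal sketch of sending a message through WebLogic JMS using the standard javax.jms API. The JNDI names and the comma-separated t3 URL (a common WebLogic pattern for client failover across cluster nodes) are illustrative assumptions, not Comcast's actual configuration.

        import java.util.Hashtable;
        import javax.jms.Connection;
        import javax.jms.ConnectionFactory;
        import javax.jms.Destination;
        import javax.jms.MessageProducer;
        import javax.jms.Session;
        import javax.naming.Context;
        import javax.naming.InitialContext;

        public class JmsSendSketch {
            public static void main(String[] args) throws Exception {
                // Comma-separated t3 addresses let the client fail over between nodes.
                Hashtable<String, String> env = new Hashtable<String, String>();
                env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
                env.put(Context.PROVIDER_URL, "t3://node1:7001,node2:7001"); // assumed hosts

                Context ctx = new InitialContext(env);
                // JNDI names here are assumptions for illustration.
                ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/BusConnectionFactory");
                Destination queue = (Destination) ctx.lookup("jms/BusQueue");

                Connection conn = cf.createConnection();
                Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(queue);
                producer.send(session.createTextMessage("hello from the bus"));
                conn.close();
            }
        }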

    Read the article

  • Partner Webcast – Oracle Coherence Applications on WebLogic 12c Grid - 21st Nov 2013

    - by Roxana Babiciu
    Oracle Coherence is the industry-leading in-memory data grid solution that enables organizations to predictably scale mission-critical applications by providing fast access to frequently used data. As data volumes and customer expectations increase, driven by the “internet of things”, social, mobile, cloud and always-connected devices, so does the need to handle more data in real time, offload over-burdened shared data services, and provide availability guarantees. The latest release, Oracle Coherence 12c, comes with great improvements in the areas of ease of use, integration, and RASP (Reliability, Availability, Scalability, and Performance). In addition, it features an innovative approach to building and deploying a Coherence application as an integral part of a typical JEE enterprise application. Read more here

    Read the article

  • New Solaris Cluster!

    - by Jeff Victor
    We released Oracle Solaris Cluster 4.1 recently. OSC offers both High Availability (HA) and Scalable Services capabilities. HA delivers automatic restart of software on the same cluster node and/or automatic failover from a failed node to a working cluster node. Software and support are available for both x86 and SPARC systems. The Scalable Services features manage multiple cluster nodes all providing a load-balanced service such as web servers or app servers. OSC 4.1 includes the ability to recover services from software failures, failure of hardware components such as DIMMs, CPUs, and I/O cards, a global file system, rolling upgrades, and much more. Oracle Availability Engineering posted a brief description and links to details. Or, you can just download it now!

    Read the article

  • Oracle Solaris Cluster 4.1 Released

    - by Larry Wake
    Today we announced the release of Oracle Solaris Cluster 4.1 (download; existing customers can just update from the package repository). New capabilities include:
    · Oracle Solaris 10 Zone Clusters: The easiest way to update and consolidate existing Solaris 10 application environments is with Oracle Solaris 10 Zones within Oracle Solaris 11 -- not only do you get higher system utilization, but you can immediately leverage new features such as network virtualization. With Oracle Solaris Cluster 4.1, you can now cluster these zones, for even higher availability.
    · Expanded disaster recovery operations: Oracle Solaris Cluster 4.1 introduces managed switchover and disaster-recovery takeover of applications and data using ZFS Storage Appliance replication services in a multi-site, multi-cluster configuration.
    · Faster application recovery with improved storage failure detection and resource dependency management.
    · Labeled security support for providing both high availability and high security, leveraging Oracle Solaris 11 Trusted Extensions.
    Learn more: Oracle Solaris Cluster at the Oracle Technology Network · Data Sheet · What's New in Oracle Solaris Cluster 4.1 · FAQs

    Read the article

  • Expression Studio 4 download

    Microsoft announced the general availability of Expression Studio 4, which includes upgraded versions of Expression Blend (including SketchFlow), Encoder, Web (including SuperPreview), and Design.

    Read the article

  • Substitute Items on Internal Sales Orders

    - by ChristineS-Oracle
    Oracle Order Management now enables you to substitute items on internal sales order lines to manage item availability. Source organizations can decide to ship a substitute item in case the original item is not available to be shipped. The application supports manual (using the Related Items window) and automatic (using ATP functionality) substitutions. To substitute an item on an ISO, you must ensure that the value of the Item Substitution on Internal Order system parameter is set to a value other than None. In addition, you must define substitute item relationships and set up automatic item substitution in the system. The application provides the option to not send notifications when any change happens on the ISO related to quantity, schedule arrival date, or item. You can control these notifications using the OM: Send Notifications of Internal Order Change profile option. For additional information, refer to the Oracle Order Management Release Notes for Release 12.2.4 (Doc ID 1906521.1).

    Read the article

  • Silverlight 4 What Devs Need to Know

    Tim Heuer has done a great post on the Silverlight 4 release and availability-of-tools announcement. BEFORE you run off to Tim's blog, please READ THIS: If you need to continue doing Windows Phone 7 development, stick with the Visual Studio 2010 Release Candidate for now! The updated CTP of the Windows Phone developer tools is not quite done yet. Information about updated tools availability will be forthcoming. Stay tuned. When you visit Tim's blog you will be prompted...

    Read the article

  • Using named_scopes on the join model of a has_many :through

    - by uberllama
    Hi folks. I've been beating my head against the wall on something that on the surface should be very simple. Let's say I have the following simplified models:
    user.rb:
        has_many :memberships
        has_many :groups, :through => :memberships
    membership.rb:
        belongs_to :group
        belongs_to :user
        STATUS_CODES = {:admin => 1, :member => 2, :invited => 3}
        named_scope :active, :conditions => {:status => [STATUS_CODES[:admin], STATUS_CODES[:member]]}
    group.rb:
        has_many :memberships
        has_many :users, :through => :memberships
    Simple, right? So what I want to do is get a collection of all the groups a user is active in, using the existing named scope on the join model. Something along the lines of User.find(1).groups.active. Obviously this doesn't work. But as it stands, I need to do something like User.find(1).memberships.active.all(:include => :group), which returns a collection of memberships plus groups. I don't want that. I know I can add another has_many on the User model with conditions that duplicate the :active named_scope on the Membership model, but that's gross:
        has_many :active_groups, :through => :memberships, :source => :group, :conditions => ...
    So my question: is there a way of using intermediary named scopes when traversing directly between models? Many thanks.

    Read the article

  • ASP MVC C#: LINQ Foreign Key Constraint conflicts

    - by wh0emPah
    I'm having a problem with LINQ. I have 2 tables (parent-child relation): Table1: Events (EventID, Description), Table2: Groups (GroupID, EventID (FK), Description). Now I want to create an Event and a child Group.
        Event e = new Event();
        e.Description = "test";
        Datacontext.Events.InsertOnSubmit(e);
        Group g = new Group();
        g.Description = "test2";
        g.EventID = e.EventID;
        Datacontext.Groups.InsertOnSubmit(g);
        Datacontext.SubmitChanges();
    When I debug, I can see that after inserting the event, the EventID has gotten a new value (auto increment). But when Datacontext.SubmitChanges() gets called, I get the following exception: "The INSERT statement conflicted with the FOREIGN KEY constraint ...". I know this can be solved by creating a relation in the LINQ diagram between Events and Groups and then setting the entity itself. But I don't want to load the events every time I ask for a list of groups. All I need is some way that, when inserting the group fails, the event insert won't be committed to the database. Sorry if this is a bit unclear, my English isn't really good. Thanks in advance!

    Read the article

  • PackageMaker install script for rxtx

    - by vinzenzweber
    I am using PackageMaker to create an installer for my application. During installation I need to run a bash script to properly install rxtx, a JNI library for serial port communication. This library needs to have the directory /var/lock in place with user "root" and group "uucp". The installation script also needs to add the current user to the group "uucp" for the lib to be able to write to /var/lock. Now when I run my application installer, the preinstall script is run as root. Therefore "whoami" returns root instead of the user who is actually running the installer. The result is that rxtx is not able to create lock files in /var/lock because the actual user was not added as a member to "uucp". How can I get the user while my script is run by the installer? Or is it better to set the permissions for /var/lock to a different group maybe? Any suggestions are welcome!
        #!/bin/sh
        curruser=`whoami`
        logger "Setting permissions for /var/lock for user $curruser!"
        if [ ! -d /var/lock ]
        then
            logger "Creating /var/lock!"
            sudo mkdir /var/lock
        fi
        sudo chgrp uucp /var/lock
        sudo chmod 775 /var/lock
        # MacOSX 10.5 and later use dscl
        if [ `sudo dscl . -read /Groups/uucp GroupMembership | grep $curruser | wc -l` = "0" ]
        then
            logger "Add user $curruser to /Groups/uucp!"
            sudo dscl . -append /Groups/uucp GroupMembership $curruser
            # to revert use:
            # sudo dscl . -delete /Groups/uucp GroupMembership $curruser
        else
            logger "GroupMembership of /var/lock not changed!"
        fi

    Read the article

  • FIlling a Java Bean tree structure from a csv flat file

    - by Clem
    Hi, I'm currently trying to construct a list of bean classes in Java from a flat description file formatted in CSV. Concretely, here is the structure of the CSV file:
        MES_ID;GRP_PARENT_ID;GRP_ID;ATTR_ID
        M1 ;   ;G1 ;A1
        M1 ;   ;G1 ;A2
        M1 ;G1 ;G2 ;A3
        M1 ;G1 ;G2 ;A4
        M1 ;G2 ;G3 ;A5
        M1 ;   ;G4 ;A6
        M1 ;   ;G4 ;A7
        M1 ;   ;G4 ;A8
        M2 ;   ;G1 ;A1
        M2 ;   ;G1 ;A2
        M2 ;   ;G2 ;A3
        M2 ;   ;G2 ;A4
    It corresponds to the hierarchical data structure:
        M1
        ---G1
        ------A1
        ------A2
        ------G2
        ---------A3
        ---------A4
        ---------G3
        ------------A5
        ---G4
        ------A6
        ------A7
        ------A8
        M2
        ---G1
        ------A1
        ------A2
        ---G2
        ------A3
        ------A4
    Remarks:
    · A message M can have an infinite number of groups G and attributes A.
    · A group G can have an infinite number of attributes and an infinite number of under-groups, each of them having under-groups too.
    That being said, I'm trying to read this flat CSV description to store it in this structure of beans:
        Map<String, MBean> messages = new HashMap<String, MBean>();

        public class MBean {
            private String mes_id;
            private Map<String, GBean> groups;
        }
        public class GBean {
            private String grp_id;
            private Map<String, ABean> attributes;
            private Map<String, GBean> underGroups;
        }
        public class ABean {
            private String attr_id;
        }
    Reading the CSV file sequentially is OK, and I've been investigating how to use recursion to store the description data, but couldn't find a way. Thanks in advance for any of your algorithmic ideas. I hope it will put you in the mood of thinking about this ... I have to admit that I'm out of ideas :s
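
    One possible approach, sketched here under the assumption that a parent group's row always appears in the file before its children's rows: no recursion is needed if you keep a flat per-message index of groups, so each child can be attached to its parent (or to the message) in constant time as rows stream in. The small constructors and package-visible maps are additions the original beans would need in some form.

        import java.util.HashMap;
        import java.util.Map;

        public class BeanTreeFiller {
            static class ABean { String attr_id; ABean(String a) { attr_id = a; } }
            static class GBean {
                String grp_id;
                Map<String, ABean> attributes = new HashMap<String, ABean>();
                Map<String, GBean> underGroups = new HashMap<String, GBean>();
                GBean(String g) { grp_id = g; }
            }
            static class MBean {
                String mes_id;
                Map<String, GBean> groups = new HashMap<String, GBean>();
                MBean(String m) { mes_id = m; }
            }

            Map<String, MBean> messages = new HashMap<String, MBean>();
            // Flat index of every group seen so far, per message, keyed by GRP_ID.
            Map<String, Map<String, GBean>> groupIndex = new HashMap<String, Map<String, GBean>>();

            // Called once per CSV row: MES_ID;GRP_PARENT_ID;GRP_ID;ATTR_ID
            void addRow(String mesId, String parentId, String grpId, String attrId) {
                MBean message = messages.get(mesId);
                if (message == null) {
                    message = new MBean(mesId);
                    messages.put(mesId, message);
                }
                Map<String, GBean> index = groupIndex.get(mesId);
                if (index == null) {
                    index = new HashMap<String, GBean>();
                    groupIndex.put(mesId, index);
                }
                GBean group = index.get(grpId);
                if (group == null) {
                    group = new GBean(grpId);
                    index.put(grpId, group);
                    if (parentId == null || parentId.trim().isEmpty()) {
                        message.groups.put(grpId, group);                  // top-level group
                    } else {
                        index.get(parentId).underGroups.put(grpId, group); // nested under its parent
                    }
                }
                group.attributes.put(attrId, new ABean(attrId));
            }
        }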

    Read the article

  • Add new language to existing Xcode project localization

    - by leolobato
    Hey guys, I'm working on an existing Xcode 3.2.2 universal iPhone OS project which is already localized for 4 languages (EN, IT, DE and FR). We are now adding a new language (JA) to this project. Each existing .lproj folder (en.lproj, it.lproj, de.lproj and fr.lproj) has almost 60 files – including PNGs, HTMLs and the Localizable.strings file. Each one of those files appears as a localized group inside Groups & Files in Xcode. They're spread all over the tree. If I right-click one of those groups (say, Localizable.strings) inside Xcode, choose Get Info, click on "Add Localization" and type "ja" – as the Xcode docs suggest – nothing happens. From what I read in this newsgroup, it's possibly because of the way those folders are named. If they were named like English.lproj and Italian.lproj, this would work. So, for me to actually import a new language localized file into the existing group, I have to:
    1. Right-click the localized group file.
    2. Choose "Add Existing File".
    3. Select the corresponding file inside the ja.lproj folder.
    I'm about to get a new ja.lproj folder with those 60 localized files and would love to import them into the project in a way that doesn't involve searching for every single file in Groups & Files and performing those steps... for every one of those 60 files. Is that possible? Is there a right (or better) way to import a new language into this Xcode project?

    Read the article

  • Regex to ensure group match doesn't end with a specific character

    - by AJ
    I'm having trouble coming up with a regular expression to match a particular case. I have a list of TV shows in about 4 formats:
        Name.Of.Show.S01E01
        Name.Of.Show.0101
        Name.Of.Show.01x01
        Name.Of.Show.101
    What I want to match is the show name. My main problem is that my regex matches the name of the show with a trailing '.'. My regex is the following:
        "^([0-9a-zA-Z\.]+)(S[0-9]{2}E[0-9]{2}|[0-9]{4}|[0-9]{2}x[0-9]{2}|[0-9]{3})"
    Some examples:
        >>> import re
        >>> SHOW_INFO = re.compile("^([0-9a-zA-Z\.]+)(S[0-9]{2}E[0-9]{2}|[0-9]{4}|[0-9]{2}x[0-9]{2}|[0-9]{3})")
        >>> match = SHOW_INFO.match("Name.Of.Show.S01E01")
        >>> match.groups()
        ('Name.Of.Show.', 'S01E01')
        >>> match = SHOW_INFO.match("Name.Of.Show.0101")
        >>> match.groups()
        ('Name.Of.Show.0', '101')
        >>> match = SHOW_INFO.match("Name.Of.Show.01x01")
        >>> match.groups()
        ('Name.Of.Show.', '01x01')
        >>> match = SHOW_INFO.match("Name.Of.Show.101")
        >>> match.groups()
        ('Name.Of.Show.', '101')
    So the question is: how do I avoid the first group ending with a period? I realize I could simply do var.strip("."), however that doesn't handle the case of "Name.Of.Show.0101". Is there a way I could improve the regex to handle that case better? Thanks in advance.

    Read the article

  • Amazon Product API: "Your request is missing a required parameter combination" on Blended ItemSearch

    - by Daniel Schaffer
    I'm having some problems trying to do an ItemSearch on the Blended index using the Amazon Product API. According to the documentation, Blended requests cannot specify the MerchantId parameter - and indeed, if I try to include it I get an error telling me so. However, when I don't include it, I get an error telling me that my request is missing a required parameter combination and that a valid combination includes MerchantId... what the hell? Here's the XML response: <Items xmlns="http://webservices.amazon.com/AWSECommerceService/2005-10-05"> <Request> <IsValid>False</IsValid> <ItemSearchRequest> <Availability>Available</Availability> <Condition>All</Condition> <Keywords> home theater pc and other geekery</Keywords> <ResponseGroup>Similarities</ResponseGroup> <ResponseGroup>SalesRank</ResponseGroup> <ResponseGroup>OfferSummary</ResponseGroup> <ResponseGroup>Small</ResponseGroup> <ResponseGroup>Images</ResponseGroup> <SearchIndex>Blended</SearchIndex> </ItemSearchRequest> <Errors> <Error> <Code>AWS.MissingParameterCombination</Code> <Message>Your request is missing a required parameter combination. Required parameter combinations include MerchantId, Availability.</Message> </Error> </Errors> </Request> </Items> The failing requests are being sent as part of batches with other requests that are succeeding. I'm using REST to send my requests, so here's an example of a request: http://ecs.amazonaws.com/onca/xml?AWSAccessKeyId=-------------& ItemSearch.1.Keywords=Mates%20of%20State& ItemSearch.1.MerchantId=Amazon& ItemSearch.1.SearchIndex=DVD& ItemSearch.2.Keywords=teaching%20Lily%20various%20computer%20related%20skills& ItemSearch.2.SearchIndex=Blended& ItemSearch.Shared.Availability=Available& ItemSearch.Shared.Condition=All& ItemSearch.Shared.ResponseGroup=Small%2CSalesRank%2CImages%2COfferSummary%2CSimilarities& Operation=ItemSearch%2CSimilarityLookup& Service=AWSECommerceService& SimilarityLookup.1.ItemId=B000FNNHZ2& SimilarityLookup.2.ItemId=B000EQ5UPU& SimilarityLookup.Shared.Availability=Available& SimilarityLookup.Shared.Condition=All& SimilarityLookup.Shared.MerchantId=Amazon& SimilarityLookup.Shared.ResponseGroup=Small%2CSalesRank%2CImages%2COfferSummary& Timestamp=2010-04-02T17%3A18%3A05Z& Signature=---------------- Any ideas as to what I'm doing wrong?

    Read the article

  • Having trouble creating vectors of System::String^

    - by Justen
    So I have a regex to parse certain parts of a file name. I'm trying to store each part in its own vector until I use it later, but it won't let me. One error I get when I try making a vector of System::String^ is error C3698: 'System::String ^' : cannot use this type as argument of 'new'. Then, when I try just making a vector of std::string, it can't implicitly convert to type System::String^, and casting won't work either. void parseData() { System::String^ pattern = "(\\D+)(\\d+)(\\w{1})(\\d+)\\.(\\D{3})"; std::vector < System::String^ > x, y, filename, separator; Regex re( pattern ); for ( int i = 0; i < openFileDialog1->FileNames->Length; i++ ) { Match^ m = re.Match( openFileDialog1->FileNames[i] ); filename.push_back( m->Groups[0]->Value );/* x.push_back( m->Groups[1]->Value ); separator.push_back( m->Groups[2]->Value ); y.push_back( m->Groups[3]->Value );*/ } }

    Read the article

  • MySQL - Structure for Permissions to Objects

    - by Kerry
    What would be an ideal structure for user permissions on objects? I've seen many related posts for general permissions, or what sections a user can access, which consist of users, userGroups and userGroupRelations tables, or something of that nature. In my system there are many different objects that can get created, and each one has to be able to be turned on or off. For instance, take a password manager that has groups and sub-groups: Group 1, Group 2, Group 3, Group 4, Group 5, Group 6, Group 7, Group 8, Group 9, Group 10. Each group can contain a set of passwords. A user can be given read, write, edit and delete permissions to any group. More groups can get created at any point in time. If someone has permission to a group, I should be able to make him have permissions to all sub-groups OR restrict it to just that group. My current thought is to have a users table, and then a permissions table with columns like:
        permission_id (int) PRIMARY_KEY
        user_id (int) INDEX
        object_id (int) INDEX
        type (varchar) INDEX
        read (bool)
        write (bool)
        edit (bool)
        delete (bool)
    This has worked in the past, but the new system I'm building needs to be able to scale rapidly, and I am unsure if this is the best structure. It also makes the idea of having someone with all sub-group permissions of a group more difficult. So, as a question: should I use the above structure? Or can someone point me in the direction of a better one?

    Read the article

  • Using a function found in a different file in a loop

    - by Anders
    This question is related to BuddyPress, and is a follow-up question from this question. I have a .csv file with 790 rows and 3 columns, where the first column is the group name, the second is the group description, and the last (third) is the slug. As far as I've been told, I can use this code:
        <?php
        $groups = array();
        if (($handle = fopen("groupData.csv", "r")) !== FALSE) {
            while (($data = fgetcsv($handle, 1000, ",")) !== FALSE) {
                $group = array(
                    'group_id'     => 'SOME ID',
                    'name'         => $data[0],
                    'description'  => $data[1],
                    'slug'         => groups_check_slug(sanitize_title(esc_attr($data[2]))),
                    'date_created' => gmdate("Y-m-d H:i:s"),
                    'status'       => 'public'
                );
                $groups[] = $group;
            }
            fclose($handle);
        }
        foreach ($groups as $group) {
            groups_create_group($group);
        }
    With http://www.nomorepasting.com/getpaste.php?pasteid=35217, which is called bp-groups.php. The thing is that I can't make it work. I've created a new file with the code written above called groupgenerator.php, uploaded the .csv file to the same folder, and opened groupgenerator.php in my browser. But I get this error:
        Fatal error: Call to undefined function groups_check_slug() in
    What am I doing wrong?

    Read the article
