Search Results

Search found 11403 results on 457 pages for 'logic named joe'.

Page 38/457 | < Previous Page | 34 35 36 37 38 39 40 41 42 43 44 45  | Next Page >

  • In-memory data structure that supports boolean querying

    - by sanity
    I need to store data in memory where I map one or more key strings to an object, as follows:

        "green", "blue"   -> object1
        "red", "yellow"   -> object2

    I need to be able to efficiently receive a list of objects where the strings match some boolean criteria, such as:

        ("red" OR "green") AND NOT "blue"

    I'm working in Java, so the ideal solution would be an off-the-shelf Java library. I am, however, willing to implement something from scratch if necessary. Anyone have any ideas? I'd rather avoid the overhead of an in-memory database if possible; I'm hoping for something comparable in speed to a HashMap (or at least the same order of magnitude).
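    One way to stay in plain-HashMap territory is an inverted index: map each key string to the set of objects tagged with it, then evaluate the boolean expression with set operations. A minimal sketch of the idea in Java (the class and method names are made up for illustration; this is not an off-the-shelf library):

        import java.util.*;

        // Minimal inverted index: tag -> set of objects, queried with set algebra.
        final class TagIndex<T> {
            private final Map<String, Set<T>> byTag = new HashMap<>();
            private final Set<T> all = new HashSet<>();

            void put(T value, String... tags) {
                all.add(value);
                for (String tag : tags) {
                    byTag.computeIfAbsent(tag, k -> new HashSet<>()).add(value);
                }
            }

            Set<T> with(String tag) {                    // all objects carrying `tag`
                return new HashSet<>(byTag.getOrDefault(tag, Set.of()));
            }

            Set<T> or(Set<T> a, Set<T> b)  { Set<T> r = new HashSet<>(a); r.addAll(b);    return r; }
            Set<T> and(Set<T> a, Set<T> b) { Set<T> r = new HashSet<>(a); r.retainAll(b); return r; }
            Set<T> not(Set<T> a)           { Set<T> r = new HashSet<>(all); r.removeAll(a); return r; }
        }

        class TagIndexDemo {
            public static void main(String[] args) {
                TagIndex<String> idx = new TagIndex<>();
                idx.put("object1", "green", "blue");
                idx.put("object2", "red", "yellow");

                // ("red" OR "green") AND NOT "blue"
                Set<String> result = idx.and(idx.or(idx.with("red"), idx.with("green")),
                                             idx.not(idx.with("blue")));
                System.out.println(result);   // [object2]
            }
        }

    If the query language needs to grow beyond simple AND/OR/NOT, an embeddable engine such as Apache Lucene running over an in-memory index is a common off-the-shelf alternative, at the cost of more setup.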

    Read the article

  • SQL - logical AND among multiple rows

    - by potrnd
    Hello. First of all, sorry that I could not think of a more descriptive title. What I want to do, using only SQL, is the following: I have some lists of strings: list1, list2 and list3. I have a dataset that contains two interesting columns, A and B. Column A contains a TransactionID and column B contains an ItemID. Naturally, there can be multiple rows that share the same TransactionID. I need to catch those transactions that have at least one ItemID that exists within each list (list1, list2 AND list3). I also need to count how many times that happens for each transaction. I hope that makes enough sense; perhaps I will be able to explain it better with a clear head. Thanks in advance.
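    One common relational approach to this kind of "at least one item from each list" requirement is conditional aggregation grouped by transaction. The sketch below is in Java with embedded SQL and assumes MySQL-style boolean arithmetic; the table name, column names, connection details and list contents are placeholders, not anything from the original question:

        import java.sql.*;

        class ListMatchQuery {
            public static void main(String[] args) throws SQLException {
                // Hypothetical connection details; adjust for your environment.
                try (Connection con = DriverManager.getConnection("jdbc:mysql://localhost/demo", "user", "pass")) {
                    String sql =
                        "SELECT transaction_id, " +
                        "       SUM(item_id IN ('a1','a2')) AS hits_list1, " +   // MySQL: boolean sums to 0/1
                        "       SUM(item_id IN ('b1','b2')) AS hits_list2, " +
                        "       SUM(item_id IN ('c1','c2')) AS hits_list3 " +
                        "FROM transactions " +
                        "GROUP BY transaction_id " +
                        "HAVING hits_list1 > 0 AND hits_list2 > 0 AND hits_list3 > 0";
                    try (Statement st = con.createStatement(); ResultSet rs = st.executeQuery(sql)) {
                        while (rs.next()) {
                            System.out.printf("%s matched %d/%d/%d items from lists 1/2/3%n",
                                    rs.getString("transaction_id"),
                                    rs.getInt("hits_list1"), rs.getInt("hits_list2"), rs.getInt("hits_list3"));
                        }
                    }
                }
            }
        }

    On engines that don't treat booleans as 0/1, the SUM(... IN (...)) shorthand would become SUM(CASE WHEN ... THEN 1 ELSE 0 END), and the HAVING clause may need the full expressions rather than the column aliases.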

    Read the article

  • How to keep the first result of a function in Prolog?

    - by zuhakasa
    I need to write a customized function that will be called many times by other, fixed functions. The first time it is called, it should return the total number of lines in a file; on subsequent calls it should return the number of lines in small sections of that file. My question is how to keep the first returned result (the total number of lines of the file) and use it on the following calls of my function. I can only write or declare things inside this function, not in the caller. Something like this:

        myFunction(Input, MyResult, FirstResult) :-
            calculateInputFunction(Input, Result), !,
            MyResult is Result,
            ... .

    The problem is that every time myFunction is called it receives a different Input and returns a different MyResult, but I would like to keep the first MyResult for use on later calls of myFunction. How can I do that? Thanks very much in advance.

    Read the article

  • MYSQL question - AND or OR?

    - by U22199
    Which is the better way to select both the 'ans' and 'quest' rows from the table?

        SELECT * FROM tablename WHERE option='ans' OR option='quest';

    or

        SELECT * FROM tablename WHERE option='ans' AND option='quest';

    Thanks so much!
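    One observation about the two queries above: the option column holds a single value per row, so the AND version can never return any rows; the OR form (or, equivalently, an IN list) is the one that selects both kinds of rows. A quick Java/JDBC sketch of the IN form, with placeholder connection details and the column name taken from the question (backquoted in case it collides with a reserved word):

        import java.sql.*;

        class OptionQuery {
            public static void main(String[] args) throws SQLException {
                try (Connection con = DriverManager.getConnection("jdbc:mysql://localhost/demo", "user", "pass");
                     PreparedStatement ps = con.prepareStatement(
                             "SELECT * FROM tablename WHERE `option` IN (?, ?)")) {
                    ps.setString(1, "ans");
                    ps.setString(2, "quest");
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            System.out.println(rs.getString("option"));
                        }
                    }
                }
            }
        }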

    Read the article

  • What does ‘?’ stand for in SQL?

    - by user295189
    I have this SQL from another programmer:

        $sql = "
            INSERT INTO `{$database}`.`table`
                ( `my_id`, `xType`, `subType`, `recordID`, `textarea` )
            VALUES
                ( {$my_id}, ?xType, ?subType, {$recordID}, ?areaText )
        ";

    My question is: why is he using ? before the values? How do I see what values are coming in? I did an echo and it shows ?xType literally as ?xType, with no values. What does ? stand for in SQL?
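    A note on the ? tokens, hedged because the snippet above is clearly going through some PHP wrapper: in most database APIs, ? marks a bind-parameter placeholder; the SQL text is sent with the placeholders and the actual values are supplied separately, which is why echoing the raw string never shows them. The named form ?xType suggests a custom query layer doing that substitution, so the values would only be visible where that layer executes $sql. For comparison, this is how positional ? placeholders look in Java/JDBC (the table and column names here are just illustrative):

        import java.sql.*;

        class PlaceholderDemo {
            public static void main(String[] args) throws SQLException {
                try (Connection con = DriverManager.getConnection("jdbc:mysql://localhost/demo", "user", "pass");
                     PreparedStatement ps = con.prepareStatement(
                             "INSERT INTO example_table (my_id, xType, subType) VALUES (?, ?, ?)")) {
                    ps.setInt(1, 42);          // values are bound separately from the SQL text,
                    ps.setString(2, "alpha");  // which is why printing the SQL never shows them
                    ps.setString(3, "beta");
                    ps.executeUpdate();
                }
            }
        }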

    Read the article

  • How to create conditional If / Else logic in a BizTalk map.

    How to create conditional logic in a BizTalk map using out-of-the-box functoids. The example takes in an XML file containing films and their receipts and creates a destination file whose structure is dependent on the incoming data. Read more - by BiZTech Know.

    Read the article

  • Application Composer Series: Where and When to use Groovy

    - by Richard Bingham
    This brief post is really intended as more of a reference than an article. The list below highlights two things: firstly, where you can add your own custom logic via Groovy code (the Groovy flag on each entry), and secondly, when you might use each particular feature. Obviously this applies only where Application Composer exists, namely Fusion CRM and Oracle Sales Cloud, and is based on current (release 8) functionality.

    Field Triggers (Groovy: Y) - React to run-time data changes. Only fired when the field is changed and upon submit.

    Object Triggers (Groovy: Y) - To extend the standard processing logic for an object, based on record creation, updates and deletes. There is a split between these firing events, with some related to UI/ADF actions and others originating in the database.
        UI trigger points:
        After Create - fires when a new object record is created. Commonly used to set default values for fields.
        Before Modify - fires when the end user tries to modify a field value. Could be used for generic warnings or extra security logic.
        Before Invalidate - fires on the parent object when one of its child object records is created, updated, or deleted. For building in relationship logic.
        Before Remove - fires when an attempt is made to delete an object record. Can be used to create conditions that prevent deletes.
        Database trigger points:
        Before Insert in Database - fires before a new object is inserted into the database. Can be used to ensure a dependent record exists or check for duplicates.
        After Insert in Database - fires after a new object is inserted into the database. Could be used to create a complementary record.
        Before Update in Database - fires before an existing object is modified in the database. Could be used to check dependent record values.
        After Update in Database - fires after an existing object is modified in the database. Could be used to update a complementary record.
        Before Delete in Database - fires before an existing object is deleted from the database. Could be used to check dependent record values.
        After Delete in Database - fires after an existing object is deleted from the database. Could be used to remove dependent records.
        After Commit in Database - fires after the change pending for the current object (insert, update, delete) is made permanent in the current transaction. Could be used when committed data that has passed all validation is required.
        After Changes Posted to Database - fires after all changes have been posted to the database, but before they are permanently committed. Could be used to make additional changes that will be saved as part of the current transaction.

    Field Validation (Groovy: Y) - Displays a user-entered error message based on Groovy logic validating the field value. The message is shown only when the validation logic returns false, and the logic is triggered only when tabbing out of the field on the user interface.

    Object Validation (Groovy: Y) - Commonly used where validation is needed across multiple related fields on the object. Triggered on the submit UI action.

    Object Workflows (Groovy: Y) - All object workflows are fired upon either record creation or update, along with the option of adding a custom Groovy firing condition.
        Field Updates (Groovy: Y) - change another field when a specified one changes. Intended as an easy way to set different run-time values (e.g. pick values for LOVs); the value field also permits Groovy logic entry.
        E-Mail Notification (Groovy: N) - sends an email notification to specified users/roles. Templates support run-time value tokens and rich text.
        Task Creation (Groovy: N) - for adding standard tasks for use in the worklist functionality.
        Outbound Message (Groovy: N) - will create and send an XML payload of the related object SDO to a specified endpoint.
        Business Process Flow (Groovy: N) - intended for approval using the seeded process; however, it can also trigger custom BPMN flows.

    Global Functions (Groovy: Y) - Utility functions that can be called from any Groovy code in Application Composer (across applications).

    Object Functions (Groovy: Y) - Utility functions that are local to the parent object. Usually triggered from within 'Buttons and Actions' definitions in Application Composer, although they can be called from other code for that object (e.g. from a trigger).

    Add Custom Fields (Groovy: Y) - When adding custom fields there are a few places you can include Groovy logic.
        Default Value (Groovy: Y) - to add logic when setting the default value as new records are entered.
        Conditionally Updateable (Groovy: Y) - to add logic to set the field to read-only or not.
        Conditionally Required (Groovy: Y) - to add logic to set the field to required or not.
        Formula Field (Groovy: Y) - used to provide a new aggregate field that is entirely based on Groovy logic and other field values.

    Simplified UI Layouts - Advanced Expressions (Groovy: Y) - Used for creating dynamic layouts for simplified UI pages where fields and regions show/hide based on run-time context values and logic. Also includes support for the depends-on feature as a trigger.

    Related references: This Blog: Application Composer Series; Extending Sales Guide: Using Groovy Scripts; Groovy Scripting Reference Guide.

    Read the article

  • Enabling DNS for IPv6 infrastructure

    After successful automatic distribution of IPv6 address information via DHCPv6 in your local network, it might be time to start offering some more services. Usually we use host names to communicate with other machines rather than their bare IPv6 addresses. In the following paragraphs we are going to enable our own DNS name server with IPv6 address resolution.

    This is the third article in a series on IPv6 configuration:

        - Configure IPv6 on your Linux system
        - DHCPv6: Provide IPv6 information in your local network
        - Enabling DNS for IPv6 infrastructure
        - Accessing your web server via IPv6

    Piece of advice: this is based on my findings on the internet while reading other people's helpful articles and going through a couple of man pages on my local system.

    What's your name and your IPv6 address?

        $ sudo service bind9 status
         * bind9 is running

    If the service is not recognised, you have to install it first on your system. This is done very easily and quickly, like so:

        $ sudo apt-get install bind9

    Once again, there is no specialised package for IPv6; the regular application is good to go. But of course it is necessary to enable IPv6 binding in the options. Let's fire up a text editor and modify the configuration file:

        $ sudo nano /etc/bind/named.conf.options

        acl iosnet {
                127.0.0.1;
                192.168.1.0/24;
                ::1/128;
                2001:db8:bad:a55::/64;
        };
        listen-on { iosnet; };
        listen-on-v6 { any; };
        allow-query { iosnet; };
        allow-transfer { iosnet; };

    The most important directive is listen-on-v6. This enables your named to bind to the IPv6 addresses specified on your system. The easiest option is to specify any as the value, and named will bind to all available IPv6 addresses during start. More details and explanations are found in the man pages of named.conf. Save the file and restart the named service. As usual, check your log files and correct your configuration in case of any logged error messages. Using the netstat command you can validate whether the service is running and to which IP and IPv6 addresses it is bound, like so:

        $ sudo service bind9 restart
        $ sudo netstat -lnptu | grep "named\W*$"
        tcp        0      0 192.168.1.2:53        0.0.0.0:*               LISTEN      1734/named
        tcp        0      0 127.0.0.1:53          0.0.0.0:*               LISTEN      1734/named
        tcp6       0      0 :::53                 :::*                    LISTEN      1734/named
        udp        0      0 192.168.1.2:53        0.0.0.0:*                           1734/named
        udp        0      0 127.0.0.1:53          0.0.0.0:*                           1734/named
        udp6       0      0 :::53                 :::*                                1734/named

    Sweet! Okay, now it's about time to resolve host names and their assigned IPv6 addresses using our own DNS name server.

        $ host -t aaaa www.6bone.net 2001:db8:bad:a55::2
        Using domain server:
        Name: 2001:db8:bad:a55::2
        Address: 2001:db8:bad:a55::2#53
        Aliases:

        www.6bone.net is an alias for 6bone.net.
        6bone.net has IPv6 address 2001:5c0:1000:10::2

    Alright, our newly configured BIND named is fully operational. Perhaps you are more familiar with the dig command. Here is the same kind of IPv6 host name lookup, which provides more details about that particular host as well as the domain in general:

        $ dig @2001:db8:bad:a55::2 www.6bone.net. AAAA

    More details on the Berkeley Internet Name Domain (bind) daemon and IPv6 are available in Chapter 22.1 of Peter Bieringer's HOWTO on IPv6.
    Setting up your own DNS zone

    Now that we have an operational named in place, it's about time to implement and configure our own host names and IPv6 address resolution. The general approach is to create your own zone database below the bind folder and to add AAAA records for your hosts. To achieve this, we first have to define the zone in the configuration file named.conf.local:

        $ sudo nano /etc/bind/named.conf.local

        //
        // Do any local configuration here
        //
        zone "ios.mu" {
                type master;
                file "/etc/bind/zones/db.ios.mu";
        };

    Here we specify the location of our zone database file. Next, we are going to create it and add our host names, our IP and our IPv6 addresses:

        $ sudo nano /etc/bind/zones/db.ios.mu

        $ORIGIN .
        $TTL 259200     ; 3 days
        ios.mu                  IN SOA  ios.mu. hostmaster.ios.mu. (
                                        2014031101 ; serial
                                        28800      ; refresh (8 hours)
                                        7200       ; retry (2 hours)
                                        604800     ; expire (1 week)
                                        86400      ; minimum (1 day)
                                        )
                                NS      server.ios.mu.
        $ORIGIN ios.mu.
        server                  A       192.168.1.2
        server                  AAAA    2001:db8:bad:a55::2
        client1                 A       192.168.1.3
        client1                 AAAA    2001:db8:bad:a55::3
        client2                 A       192.168.1.4
        client2                 AAAA    2001:db8:bad:a55::4

    With a couple of machines in place, it's time to reload that new configuration. Note: each time you change your zone databases you have to modify the serial information, too. Named loads the plain-text zone definitions and converts them into an internal, indexed binary format to improve lookup performance. If you forget to change your serial, named will not use the new records from the text file but the indexed ones, unless you flush the index and force a reload of the zone. This can be done easily by either restarting named:

        $ sudo service bind9 restart

    or by reloading the configuration using the name server control utility, rndc:

        $ sudo rndc reconfig

    Check your log files for any error messages and whether the new zone database has been accepted. Next, we resolve a host name to get its IPv6 address, like so:

        $ host -t aaaa server.ios.mu. 2001:db8:bad:a55::2
        Using domain server:
        Name: 2001:db8:bad:a55::2
        Address: 2001:db8:bad:a55::2#53
        Aliases:

        server.ios.mu has IPv6 address 2001:db8:bad:a55::2

    Looks good. Alternatively, you could have just pinged the system using the ping6 command instead of the regular ping:

        $ ping6 server
        PING server(2001:db8:bad:a55::2) 56 data bytes
        64 bytes from 2001:db8:bad:a55::2: icmp_seq=1 ttl=64 time=0.615 ms
        64 bytes from 2001:db8:bad:a55::2: icmp_seq=2 ttl=64 time=0.407 ms
        ^C
        --- ios1 ping statistics ---
        2 packets transmitted, 2 received, 0% packet loss, time 1001ms
        rtt min/avg/max/mdev = 0.407/0.511/0.615/0.104 ms

    That also looks promising to me. How about your configuration? Next, it might be interesting to extend the range of available services on the network. One essential service would be to have web sites at hand.
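    If you also want to confirm from application code that the new AAAA records resolve, here is an optional, minimal sketch in Java using only the standard library. It relies on whatever resolver the machine running it is configured to use, so point that at the name server above first; the host name is the one from the example zone:

        import java.net.InetAddress;
        import java.net.Inet6Address;

        class AaaaCheck {
            public static void main(String[] args) throws Exception {
                String host = args.length > 0 ? args[0] : "server.ios.mu";  // host from the example zone above
                for (InetAddress addr : InetAddress.getAllByName(host)) {
                    if (addr instanceof Inet6Address) {                     // keep only AAAA results
                        System.out.println(host + " has IPv6 address " + addr.getHostAddress());
                    }
                }
            }
        }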

    Read the article

  • How to configure DNS Server on Fedora

    - by user863873
    I want to learn how to configure my home PC server into a web server with a domain and host. My IP is 109.99.141.133 and it currently points to a phpinfo page hosted on my home server. My registered domain is anunta-anunturi.ro. I searched for a tutorial and I've read that I have to configure /etc/named.conf and the source files for the new zone that I create. So, from the tutorials, my /etc/named.conf looks like this:

        //
        // named.conf
        //
        // Provided by Red Hat bind package to configure the ISC BIND named(8) DNS
        // server as a caching only nameserver (as a localhost DNS resolver only).
        //
        // See /usr/share/doc/bind*/sample/ for example named configuration files.
        //
        options {
            listen-on port 53 { 127.0.0.1; };
            listen-on-v6 port 53 { ::1; };
            directory "/var/named";
            dump-file "/var/named/data/cache_dump.db";
            statistics-file "/var/named/data/named_stats.txt";
            memstatistics-file "/var/named/data/named_mem_stats.txt";
            allow-query { localhost; };
            recursion yes;
            dnssec-enable yes;
            dnssec-validation yes;
            dnssec-lookaside auto;
            /* Path to ISC DLV key */
            bindkeys-file "/etc/named.iscdlv.key";
            managed-keys-directory "/var/named/dynamic";
        };
        logging {
            channel default_debug {
                file "data/named.run";
                severity dynamic;
            };
        };
        zone "anunta-anunturi.ro" IN {
            type master;
            file "/etc/anunta-anunturi.db";
        };
        zone "." IN {
            type hint;
            file "named.ca";
        };
        include "/etc/named.rfc1912.zones";
        include "/etc/named.root.key";

    My /etc/anunta-anunturi.db file looks like this — I'm not sure if this is okay, or if it's the easy one:

        $TTL 86400
        anunta-anunturi.ro.   IN   SOA   serveur.anunta-anunturi.ro. root.serveur.anunta-anunturi.ro. (
                                         1997022700 ; Serial
                                         28800      ; Refresh
                                         14400      ; Retry
                                         3600000    ; Expire
                                         86400 )    ; Minimum
                              IN   NS    serveur.anunta-anunturi.ro.
                              IN   MX    10 mail.anunta-anunturi.ro.
        serveur.anunta-anunturi.ro.   IN   A   192.168.1.37
        www.anunta-anunturi.ro.       IN   A   192.168.1.37
        mail.anunta-anunturi.ro.      IN   A   192.168.1.37

    Extra info: At home I receive internet from my ISP through a router. My home PC and server receive their IPs automatically from the router when I start/restart. In my local home network, my server receives the IP 192.168.1.37 from the router. When I enter 109.99.141.133 in my browser, it points to the router, which forwards port 80 to the local IP 192.168.1.37 (my home server).

    Questions: Are my two files good? What/where is the nameserver that I need to copy/paste at my top-level domain registrar (where I registered my domain: rotld.ro)?

    Read the article

  • Should I have one dll or multiple for Business Logic?

    - by Brian
    In my situation, my company services many types of customers. Almost every customer requires their own business logic. Of course, there will be a base layer that all business logic should inherit from. However, I'm going back and forth on how to architect this: either one dll for all customers, or one dll for each. My biggest point of contention is upgrading the software. We have about 12 data entry personnel who work with 20 companies, and it's critical that they have little downtime. My concern is that if I deploy everything in one dll, I could introduce a bug into company A's logic while only intending to update company B's logic. I believe I could reduce the risk if each company's logic had its own dll, since then I could deploy company B's update without harming company A's. I will be the only one supporting this. That said, this also seems like a nightmare to manage: 20 different dlls, and that's for the BLL alone. I also need to create a View layer and a ViewModel layer, so potentially I could have 20 (companies) * 3 (layers), which would equate to 60 dlls. Thank you.

    Read the article

  • Is it abuse to put application/business logic inside of jQuery plugins?

    - by RenderIn
    Is it appropriate to create jQuery plugins which are specific to a single page? I've been creating generic plugins that are not tied to any context and contain no business logic, but some people I've talked to suggest that almost all javascript, including business logic and logic specific to a single page, should be inside of jQuery plugins. Is it appropriate to have a validateformXYZ plugin which validates a specific HTML form? I'm buying into jQuery 100% but am not sure if this is a misuse or not.

    Read the article

  • Abstract away a compound identity value for use in business logic?

    - by John K
    While separating business logic and data access logic into two different assemblies, I want to abstract away the concept of identity so that the business logic deals with one consistent identity without having to understand its actual representation in the data source. I've been calling this a compound identity abstraction.

    Data sources in this project are swappable and varied, and the business logic shouldn't care which data source is currently in use. The identity is the toughest part because its implementation can change per kind of data source, whereas other fields like name, address, etc. are consistently scalar values.

    What I'm searching for is a good way to abstract the concept of identity, whether that is an existing library, a software pattern, or just a solid idea of some kind. The proposed compound identity value would have to be comparable and usable in the business logic and passed back to the data source to specify the records, entities and/or documents to affect, so the data source must be able to parse back out the details of its own compound ids.

    Data source examples (to show what I mean by various data sources having different identity implementations): a relational data source might express a piece of content with an integer identifier plus a language-specific code, for example:

        content_id   language   (other columns expressing details of the content)
        1            en_us
        1            fr_ca

    The identity of the first record in the above example is 1 + en_us. However, when a NoSQL data source is substituted, it might represent each piece of content with a GUID string (936DA01F-9ABD-4d9d-80C7-02AF85C822A8) plus a language code of a different standardization. And a third kind of data source might use just a simple scalar value. So on and so forth; you get the idea.
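    One simple shape for this is an opaque compound identity value object: the business layer only compares it and passes it around, while each data source knows how to build and unpack its own variant. A rough sketch in Java; the class and field names are mine, purely for illustration:

        import java.util.*;

        // Opaque compound identity: business logic treats it as a comparable value;
        // only the owning data source understands the parts.
        final class EntityId {
            private final String sourceKind;      // e.g. "relational", "document"
            private final List<String> parts;     // e.g. ["1", "en_us"] or a single GUID

            EntityId(String sourceKind, String... parts) {
                this.sourceKind = sourceKind;
                this.parts = List.of(parts);
            }

            String sourceKind() { return sourceKind; }
            List<String> parts() { return parts; }

            @Override public boolean equals(Object o) {
                return o instanceof EntityId other
                        && sourceKind.equals(other.sourceKind)
                        && parts.equals(other.parts);
            }
            @Override public int hashCode() { return Objects.hash(sourceKind, parts); }
            @Override public String toString() { return sourceKind + ":" + String.join("|", parts); }
        }

        class IdDemo {
            public static void main(String[] args) {
                EntityId relational = new EntityId("relational", "1", "en_us");
                EntityId document   = new EntityId("document", "936DA01F-9ABD-4d9d-80C7-02AF85C822A8", "en-US");
                System.out.println(relational + " / " + document);
            }
        }

    The data-access layer that issued an EntityId is the only place that interprets parts(), which keeps the business logic free of source-specific identity knowledge.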

    Read the article

  • What is the logic behind Ubuntu's messy installation process?

    - by Swaahaa
    Before you jump to the conclusion that I want to "discuss" this question, I would like to stop you, with all respect, and state that I want a clear-cut answer. Ubuntu is such an awesome operating system. Why do users dread installing each and every application? I mean, why is there no platform in Ubuntu that just installs anything? I am happy when, after a lot of sudo, terminals and this and that, I install something. But what is the logic behind such a mess?

    Read the article

  • What is the correct step-by-step logic for exporting a scene with baked occlusion and loading it at runtime?

    - by myWallJSON
    I wonder what the correct step-by-step logic is for exporting a scene with baked occlusion (culling data) so that the scene can be loaded at runtime (on the fly from the internet, for example). Currently my plan looks like this:

        1. Create prefabs.
        2. Place them onto my scene (into the Hierarchy), say 20 buffaloes, some horses and some buildings.
        3. Create an empty prefab and drag all my scene objects from the Hierarchy onto it.
        4. Export the prefab.

    So generally I put all my scene objects into one large prefab and export it, but it seems that all objects that were marked as static get this property turned off when loading them at runtime, so no frustum culling and no occlusion culling happens. So I wonder what the correct way is to export Scene + Objects + Occlusion (and other culling) data for a future load of such a scene at runtime? I am asking about the current 3.5.2 Pro and future 4 Pro versions of U3D.

    Read the article

  • Can the following Domain Entity contain logic for creating/deleting other entities?

    - by user702769
    a) As far as I understand it, in most cases the Domain Model (DM) doesn't contain code for creating/deleting domain entities; instead it is the job of layers on top of the DM (i.e. the service layer or UI layer) to create/delete domain entities?

    b) Domain entities are modelled after real-world entities. Assuming the particular real-world entity being abstracted does have the ability to create/delete other real-world entities, then I assume the domain entity abstracting it could also contain logic for creating/deleting other entities?

        class RobotDestroyerCreator {
            ...
            void heavyThinking() {
                ...
                if (...) {
                    unitOfWork.registerDelete(robot);
                }
                ...
                if (...) {
                    var robotNew = new Robot(...);
                    unitOfWork.registerNew(robotNew);
                }
                ...
            }
        }

    Thank you.
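    For what it's worth, here is a minimal, runnable rendering of that pseudo-code in Java, with a hypothetical UnitOfWork interface standing in for whatever persistence abstraction would really be used. It only illustrates the shape of an entity registering creations and deletions with a unit of work; it does not argue for or against the design:

        import java.util.*;

        // Hypothetical unit-of-work abstraction; real projects would use their ORM's equivalent.
        interface UnitOfWork {
            void registerNew(Object entity);
            void registerDelete(Object entity);
        }

        class Robot {
            final String name;
            Robot(String name) { this.name = name; }
            @Override public String toString() { return name; }
        }

        // Domain entity that decides, as part of its own behaviour, to create/delete other entities.
        class RobotDestroyerCreator {
            private final UnitOfWork unitOfWork;
            private final Robot robot = new Robot("old-robot");

            RobotDestroyerCreator(UnitOfWork unitOfWork) { this.unitOfWork = unitOfWork; }

            void heavyThinking(boolean shouldDestroy, boolean shouldCreate) {
                if (shouldDestroy) {
                    unitOfWork.registerDelete(robot);                   // entity schedules a delete
                }
                if (shouldCreate) {
                    unitOfWork.registerNew(new Robot("new-robot"));     // entity schedules a create
                }
            }
        }

        class UowDemo {
            public static void main(String[] args) {
                List<String> log = new ArrayList<>();
                UnitOfWork uow = new UnitOfWork() {
                    public void registerNew(Object e)    { log.add("new: " + e); }
                    public void registerDelete(Object e) { log.add("delete: " + e); }
                };
                new RobotDestroyerCreator(uow).heavyThinking(true, true);
                log.forEach(System.out::println);
            }
        }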

    Read the article

  • How do you handle domain logic that spans multiple model objects in an ORM?

    - by duality_
    So I know that business logic should be placed in the model. But using an ORM, it is not as clear where I should place code that handles multiple objects. E.g. let's say we have a Customer model which has a type of either sporty or posh, and we want to call customer.add_bonus() on every posh customer. Where would we do this? Do we create a new class to handle all this? If yes, where do we put it (alongside all the other model classes, but not subclassed from the ORM)? I'm currently using the Django framework in Python, so specific suggestions are even more welcome.

    Read the article

  • Should I create a new class for a specific piece of business logic?

    - by Riz
    I have a Request class based on the entity of the same name in my Domain. It currently only has property definitions. I'd like to add a method for checking for a duplicate Request, which I'll call from my controller. Should I add a method called CheckDuplicate to the Request class? Would I be violating the SRP? The method will need to access a database context to check already existing requests. I'm thinking of creating another class altogether for this logic, one that accepts a data context as part of its constructor. But creating a whole new class for just one method seems like a waste too. Any advice?
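    Here is a sketch of the "separate class that accepts a data context in its constructor" option mentioned above, written in Java for illustration. The repository abstraction and entity fields are invented, and whether this is better than a CheckDuplicate method on Request is exactly the trade-off being asked about:

        import java.util.*;

        // Stand-ins for the domain entity and the data access context.
        class Request {
            final String customerId;
            final String productCode;
            Request(String customerId, String productCode) {
                this.customerId = customerId;
                this.productCode = productCode;
            }
        }

        interface RequestRepository {                 // hypothetical data-context abstraction
            List<Request> findByCustomer(String customerId);
        }

        // Single-purpose checker: keeps persistence knowledge out of the Request entity.
        class DuplicateRequestChecker {
            private final RequestRepository repository;

            DuplicateRequestChecker(RequestRepository repository) {
                this.repository = repository;
            }

            boolean isDuplicate(Request candidate) {
                return repository.findByCustomer(candidate.customerId).stream()
                        .anyMatch(existing -> existing.productCode.equals(candidate.productCode));
            }
        }

        class CheckerDemo {
            public static void main(String[] args) {
                RequestRepository inMemory = customerId -> List.of(new Request("c1", "widget"));
                DuplicateRequestChecker checker = new DuplicateRequestChecker(inMemory);
                System.out.println(checker.isDuplicate(new Request("c1", "widget")));   // true
            }
        }

    A controller would then have a DuplicateRequestChecker injected (or construct one with its data context) and call isDuplicate before saving.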

    Read the article

  • What's the difference between the BitTorrent clients named "BitTorrent" and "µTorrent"?

    - by Eric
    A similar question was asked before, but it never really got answered, I think in part because of terminology confusion. So, to be very clear: what's the difference between the two BitTorrent clients, one named "BitTorrent" and the one named "µTorrent"? They look to have identical UIs, right down to the same checkboxes in the preferences dialogs. Why are there two programs with different names that appear to be identical? Is one superior to the other? (Are they different in any way?) Thanks.

    Read the article

  • Reloading Apache httpd on Windows - error 'No installed service named "Apache2.2".'

    - by user143228
    This is a variant of "How do I restart Apache on Windows? 'Apache -k restart' gives error 'No installed service named Apache2'". I have Apache httpd 2.2.22 on Windows, not running as a service (our product's uber-service starts httpd as a console app). I'm trying to get httpd to reload its configuration; Apache's Windows docs suggest httpd -k restart from any console window. When I do that, I get:

        PS C:\apache\bin> .\httpd.exe -k restart
        [Mon Oct 29 14:06:56 2012] [error] (OS 2)The system cannot find the file specified.  : No installed service named "Apache2.2".

    How do I convince it that Apache is not running as a service?

    Read the article

  • Is error suppression acceptable in role of logic mechanism?

    - by Rarst
    This came up in code review at work in the context of PHP and the @ operator. However, I want to try to keep this in a more generic form, since the few questions about it I found on SO got bogged down in technical specifics. Accessing an array field that is not set results in an error message, and is commonly handled by the following logic (pseudo code):

        if field value is set
            output field value

    The code in question was doing it like:

        start ignoring errors
        output field value
        stop ignoring errors

    The reasoning for the latter was that it's more compact and readable code in this specific case. I feel that those benefits do not justify the misuse (IMO) of language mechanics. Is such code being "clever" in a bad way? Is discarding a possible error (for any reason) acceptable practice compared to explicitly handling it (even if that leads to more extensive and/or intensive code)? Is it acceptable for operators to cross the boundaries of their intended use (like, in this case, using error handling to control output)?

    Edit: I wanted to keep it more generic, but the specific code being discussed was like this:

        if ( isset($array['field']) ) {
            echo '<li>' . $array['field'] . '</li>';
        }

    vs the following:

        echo '<li>' . @$array['field'] . '</li>';

    Read the article

  • Should this code/logic be included in Business Objects class or a separate class?

    - by aspdotnetuser
    I have created a small application with a three-tier architecture, and I have business object classes to represent entities such as User, Orders, UserType etc. In these classes I have methods that are executed when the constructor of, for example, User is called. These methods perform calculations and generate details that set up data for attributes that are part of each User object. Here is some code from the business object class User.cs:

        public class User
        {
            public string Name { get; set; }
            public int RandomNumber { get; set; }
            // etc.

            public User()
            {
                Name = GetName();
                RandomNumber = GetRandomNumber();
            }

            public string GetName()
            {
                // ...
                return name;
            }

            public int GetRandomNumber()
            {
                // ...
                return randomNumber;
            }
        }

    Should this logic be included in the business object classes, or should it be included in a Utilities class of some kind? Or in the business rules?

    Read the article

  • Debian 5 server is randomly shutting down.

    - by revofreak
    My Debian 5 VPS is suffering from random shutdowns. I reinstalled it several times; the host moved me to a different physical box, checked the install image, and said everyone else also uses it and is fine. Here's the output from syslog:

        Mar 27 00:19:19 noobintraining-1 -- MARK --
        Mar 27 00:32:01 noobintraining-1 shutdown[18142]: shutting down for system halt
        Mar 27 00:32:06 noobintraining-1 init: Switching to runlevel: 0
        Mar 27 00:32:06 noobintraining-1 xinetd[15907]: Exiting...
        Mar 27 00:32:07 noobintraining-1 named[15865]: received control channel command 'stop -p'
        Mar 27 00:32:07 noobintraining-1 named[15865]: shutting down: flushing changes
        Mar 27 00:32:07 noobintraining-1 named[15865]: stopping command channel on 127.0.0.1#953
        Mar 27 00:32:07 noobintraining-1 named[15865]: stopping command channel on ::1#953
        Mar 27 00:32:07 noobintraining-1 named[15865]: no longer listening on ::#53
        Mar 27 00:32:07 noobintraining-1 named[15865]: no longer listening on 127.0.0.1#53
        Mar 27 00:32:07 noobintraining-1 named[15865]: no longer listening on 89.238.172.132#53
        Mar 27 00:32:07 noobintraining-1 named[15865]: exiting
        Mar 27 00:32:07 noobintraining-1 exiting on signal 15

    Any help is most appreciated!

    Read the article

  • Session memory – who’s this guy named Max and what’s he doing with my memory?

    - by extended_events
    SQL Server MVP Jonathan Kehayias (blog) emailed me a question last week when he noticed that the total memory used by the buffers for an event session was larger than the value he specified for the MAX_MEMORY option in the CREATE EVENT SESSION DDL. The answer here seems like an excellent subject for me to kick off my new "401 – Internals" tag that identifies posts where I pull back the curtains a bit and let you peek into what's going on inside the extended events engine.

    In a previous post (Option Trading: Getting the most out of the event session options) I explained that we use a set of buffers to store the event data before we write the event data to asynchronous targets. The MAX_MEMORY along with the MEMORY_PARTITION_MODE defines how big each buffer will be. Theoretically, that means that I can predict the size of each buffer using the following formula:

        max memory / # of buffers = buffer size

    If it were that simple I wouldn't be writing this post.

    I'll take "boundary" for 64K, Alex

    For a number of reasons that are beyond the scope of this blog, we create event buffers in 64K chunks. The result of this is that the buffer size indicated by the formula above is rounded up to the next 64K boundary, and that is the size used to create the buffers. If you think visually, this means that the graph of your max_memory option compared to the actual buffer size that results will look like a set of stairs rather than a smooth line. You can see this behavior by looking at the output of dm_xe_sessions, specifically the fields related to the buffer sizes, over a range of different memory inputs.

    Note: This test was run on a 2 core machine using per_cpu partitioning, which results in 5 buffers. (See my previous post referenced above for the math behind the buffer count.)

        input_memory_kb  total_regular_buffers  regular_buffer_size  total_buffer_size
        637              5                      130867               654335
        638              5                      130867               654335
        639              5                      130867               654335
        640              5                      196403               982015
        641              5                      196403               982015
        642              5                      196403               982015

    This is just a segment of the results that shows one of the "jumps" between the buffer boundary at 639 KB and 640 KB. You can verify the size boundary by doing the math on the regular_buffer_size field, which is returned in bytes:

        196403 - 130867 = 65536 bytes
        65536 / 1024 = 64 KB

    The relationship between the input for max_memory and when the regular_buffer_size is going to jump from one 64K boundary to the next will change based on the number of buffers being created. The number of buffers is dependent on the partition mode you choose. If you choose any partition mode other than NONE, the number of buffers will depend on your hardware configuration. (Again, see the earlier post referenced above.) With the default partition mode of NONE, you always get three buffers, regardless of machine configuration, so I generated a "range table" for max_memory settings between 1 KB and 4096 KB as an example.
        start_memory_range_kb  end_memory_range_kb  total_regular_buffers  regular_buffer_size  total_buffer_size
        1                      191                   NULL                   NULL                 NULL
        192                    383                   3                      130867               392601
        384                    575                   3                      196403               589209
        576                    767                   3                      261939               785817
        768                    959                   3                      327475               982425
        960                    1151                  3                      393011               1179033
        1152                   1343                  3                      458547               1375641
        1344                   1535                  3                      524083               1572249
        1536                   1727                  3                      589619               1768857
        1728                   1919                  3                      655155               1965465
        1920                   2111                  3                      720691               2162073
        2112                   2303                  3                      786227               2358681
        2304                   2495                  3                      851763               2555289
        2496                   2687                  3                      917299               2751897
        2688                   2879                  3                      982835               2948505
        2880                   3071                  3                      1048371              3145113
        3072                   3263                  3                      1113907              3341721
        3264                   3455                  3                      1179443              3538329
        3456                   3647                  3                      1244979              3734937
        3648                   3839                  3                      1310515              3931545
        3840                   4031                  3                      1376051              4128153
        4032                   4096                  3                      1441587              4324761

    As you can see, there are 21 "steps" within this range, and max_memory values below 192 KB fall below the 64K-per-buffer limit, so they generate an error when you attempt to specify them.

    Max approximates True as memory approaches 64K

    The upshot of this is that the max_memory option does not imply a contract for the maximum memory that will be used for the session buffers (those of you who read Take it to the Max (and beyond) know that max_memory really only refers to the event session buffer memory), but is more of an estimate of total buffer size to the nearest higher multiple of 64K times the number of buffers you have. The maximum delta between your initial max_memory setting and the true total buffer size occurs right after you break through a 64K boundary; for example, if you set max_memory = 576 KB (the 576-767 row in the table), your actual buffer size will be closer to 767 KB in a non-partitioned event session. You get "stepped up" for every 191 KB block of initial max_memory, which isn't likely to cause a problem for most machines.

    Things get more interesting when you consider a partitioned event session on a computer that has a large number of logical CPUs or NUMA nodes. Since each buffer gets "stepped up" when you break a boundary, the delta can get much larger because it's multiplied by the number of buffers. For example, a machine with 64 logical CPUs will have 160 buffers using per_cpu partitioning, or, if you have 8 NUMA nodes configured on that machine, you would have 24 buffers when using per_node. If you've just broken through a 64K boundary and get "stepped up" to the next buffer size, you'll end up with a total buffer size approximately 10240 KB or 1536 KB respectively (64K * # of buffers) larger than the max_memory value you might think you're getting. Using per_cpu partitioning on a large machine has the most impact because of the large number of buffers created.

    If the amount of memory being used by your system within these ranges is important to you, then this is something worth paying attention to and considering when you configure your event sessions. The DMV dm_xe_sessions is the tool to use to identify the exact buffer size for your sessions. In addition to the regular buffers (read: event session buffers) you'll also see the details for large buffers if you have configured MAX_EVENT_SIZE. The "buffer steps" for any given hardware configuration should be static within each partition mode, so if you want to have a handy reference available when you configure your event sessions, you can use the following code to generate a range table similar to the one above that is applicable to your specific machine and chosen partition mode.
        DECLARE @buf_size_output table (input_memory_kb bigint, total_regular_buffers bigint, regular_buffer_size bigint, total_buffer_size bigint)
        DECLARE @buf_size int, @part_mode varchar(8)
        SET @buf_size = 1           -- Set to the beginning of your max_memory range (KB)
        SET @part_mode = 'per_cpu'  -- Set to the partition mode for the table you want to generate

        WHILE @buf_size <= 4096     -- Set to the end of your max_memory range (KB)
        BEGIN
            BEGIN TRY
                IF EXISTS (SELECT * FROM sys.server_event_sessions WHERE name = 'buffer_size_test')
                    DROP EVENT SESSION buffer_size_test ON SERVER

                DECLARE @session nvarchar(max)
                SET @session = 'create event session buffer_size_test on server
                                add event sql_statement_completed
                                add target ring_buffer
                                with (max_memory = ' + CAST(@buf_size as nvarchar(4)) + ' KB, memory_partition_mode = ' + @part_mode + ')'
                EXEC sp_executesql @session

                SET @session = 'alter event session buffer_size_test on server
                                state = start'
                EXEC sp_executesql @session

                INSERT @buf_size_output (input_memory_kb, total_regular_buffers, regular_buffer_size, total_buffer_size)
                    SELECT @buf_size, total_regular_buffers, regular_buffer_size, total_buffer_size
                    FROM sys.dm_xe_sessions WHERE name = 'buffer_size_test'
            END TRY
            BEGIN CATCH
                INSERT @buf_size_output (input_memory_kb)
                    SELECT @buf_size
            END CATCH

            SET @buf_size = @buf_size + 1
        END

        DROP EVENT SESSION buffer_size_test ON SERVER

        SELECT MIN(input_memory_kb) start_memory_range_kb,
               MAX(input_memory_kb) end_memory_range_kb,
               total_regular_buffers, regular_buffer_size, total_buffer_size
        FROM @buf_size_output
        GROUP BY total_regular_buffers, regular_buffer_size, total_buffer_size

    Thanks to Jonathan for an interesting question and a chance to explore some of the details of Extended Event internals. - Mike
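    As a rough cross-check of the stepping behaviour described above, here is a small sketch in Java that estimates the regular buffer size from max_memory and the buffer count. The 64 KB chunking comes from the post; the small per-buffer overhead constant is merely inferred from the numbers shown above and is not an official figure, so treat the output as an approximation of what dm_xe_sessions reports:

        // Rough model of the buffer sizing described above: buffers are allocated in
        // 64 KB chunks and, judging from the DMV numbers in the post, each buffer
        // loses a small fixed amount to overhead (131072 - 130867 = 205 bytes).
        class XeBufferEstimate {
            static final long CHUNK_BYTES = 64 * 1024;
            static final long APPARENT_OVERHEAD_BYTES = 205;   // inferred from the post, not documented

            static long estimateRegularBufferSize(long maxMemoryKb, int bufferCount) {
                long perBufferRequest = (maxMemoryKb * 1024 + bufferCount - 1) / bufferCount;  // ceil
                long chunks = 1;
                while (chunks * CHUNK_BYTES - APPARENT_OVERHEAD_BYTES < perBufferRequest) {
                    chunks++;                                  // step up to the next 64 KB boundary
                }
                return chunks * CHUNK_BYTES - APPARENT_OVERHEAD_BYTES;
            }

            public static void main(String[] args) {
                // 2-core machine with per_cpu partitioning -> 5 buffers, as in the post's first table.
                for (long kb : new long[] {637, 639, 640, 642}) {
                    System.out.printf("max_memory = %d KB -> regular_buffer_size ~ %d bytes%n",
                            kb, estimateRegularBufferSize(kb, 5));
                }
                // Expected: 637-639 KB -> 130867, 640-642 KB -> 196403, matching the table above.
            }
        }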

    Read the article

  • B2B and B2C Commerce are alike… but a little different – Oracle Commerce named Leader in Forrester B2B Commerce Wave

    - by Katrina Gosek
    We weren’t surprised to see Oracle Commerce positioned as a Leader in Forrester’s first Commerce Wave focused on B2B, released earlier this month. The report validates much of what we’ve heard from our largest customers – the world’s largest distribution, manufacturing and high-tech customers who sell billions of dollars of goods and services to other businesses through their Web channels. More importantly, the report confirms something very important: B2B and B2C Commerce are alike… but a little different.

    B2B and B2C Commerce are alike…

    Clearly, B2C experiences have set expectations for B2B. Every B2B buyer is a consumer at home and brings the same expectations to a website selling electronic components, aftermarket parts, or MRO products. Forrester calls these rich consumer-based capabilities that help B2B customers do their jobs "table stakes": search & navigation, promotions, cross-channel commerce and mobile: "Whether they are just beginning to sell online or are in the late stages of launching a next-generation site, B2B eCommerce operations today must: offer a customer experience standard comparable to what leading b2c sites now offer; address the growing influence that mobile devices are having in the workplace; make a qualitative and quantitative business case that drives sustained investment."

    Just five years ago, many of our B2B customers’ online business comprised only 5-10% of their total revenue. Today, when we speak to those same brands, we hear about double and triple digit growth in their online channels. Many have seen the percentage of the business they perform in their web channels cross the 30-50% threshold. You can hear first-hand from several Oracle Commerce B2B customers about the success they are seeing, and what they’re trying to accomplish (Carolina Biological, Premier Farnell, DeliXL, Elsevier). This momentum is likely the reason Forrester broke out the separate B2B Commerce Wave from the B2C Wave. In fact, B2B is becoming the larger force in commerce, expected to collect twice the online dollars of B2C this year ($559 billion).

    But a little different…

    Despite the similarities, there is a key difference between B2C and B2B. Unlike a consumer shopping for shoes, a business shopper buying from a distributor or manufacturer is coming to the Web channel as part of their job. So in addition to the rich, consumer-like experience this shopper expects, B2B buyers need quoting tools and complex pricing capabilities, like eProcurement, bulk order entry, and other self-service tools such as account, contract and organization management.

    Forrester is also emphasizing three additional "back-end" tools and capabilities their clients say they need to drive growth in their B2B online channels: i) product information management (PIM), which provides a single system of record for large part lists and product catalogs; ii) web content management (WCM), needed to manage large volumes of unstructured marketing information; and iii) order management systems (OMS), which manage and orchestrate the complex B2B order life cycle from quote through approval, submission to manufacturing, distribution and delivery. We would like to expand on each of these three areas. As Forrester highlights, back-end PIM is definitely needed by B2B Commerce providers.
    Most B2B companies have made significant investments in enterprise-grade PIMs, given the importance of product data management for aggregation and syndication of content, product attribution, analytics, and handling of complex workflows. While in principle it may sound appealing to have a PIM as part of a commerce offering (especially for SMBs who have to do more with less), our customers have typically found that PIM in a commerce platform is largely redundant with what they already have in place, and is not fully featured or robust enough to handle the complexity of the product data sets that B2B distributors and manufacturers usually handle. To meet the PIM needs for commerce, Oracle offers enterprise PIM (Product Hub/Fusion PIM) and a robust enterprise data quality product (EDQP) integrated with the Oracle Commerce solution. These are key differentiators of our offering, and these capabilities are becoming even more tightly integrated with Oracle Commerce over time. For commerce, what customers really need is a robust product catalog and content management system that enables business users to further enrich and ready catalog and content data to be presented and sold online. This has been a significant area of investment in the Oracle Commerce platform, which continues to get stronger. We see this combination of capabilities as best meeting the needs of our customers for a commerce platform without adding a largely redundant, less functional PIM in the commerce front-end.

    On the topic of web content management, we were pleased to see Forrester recognize Oracle’s unique functional capabilities in this area and the "unique opportunity in the market to lead the convergence of commerce and content management with the amalgamation of Oracle Commerce with WebCenter Sites (formerly FatWire)." Strong content management capabilities are critical for distributors and manufacturers who are frequently serving an engineering audience coming to their websites to conduct product research in search of technical data sheets, drawings, videos and more. The convergence of content, commerce, and experience is critical for B2B brands selling online.

    Regarding order management, Forrester notes that many businesses use their existing back-end enterprise resource planning (ERP) systems to manage order life cycles. We hear the same from most of our B2B customers, as they already have an ERP system—if not several of them—and are not interested in yet another one.

    So what do we take away from the Wave results? Forrester notes that the Oracle Commerce Platform "has always had strong B2B commerce capabilities and Oracle has an exhaustive list of B2B customers using the solution." What makes us excited about developing leading B2B solutions are the close relationships with our customers and the clear opportunity in the market – which we’ll address in an exciting new release in the coming months. Oracle has one of the world’s largest B2B customer bases, providing leading solutions across key business-to-business functions – from marketing, sales automation, and service to master data management and ERP.
    To learn more about Oracle’s Commerce product vision and strategy, visit our website and check out these other B2B Commerce Resources:

        - 2013 B2B Commerce Trends Report
        - B2B Commerce Whitepaper: Consumerization, Complexity, Change
        - B2B Commerce Webcast: What Industry Trend Setters Do Right
        - Internet Retailer, Web Drives Sales for B2B Companies
        - Internet Retailer, The Web Means Business: B2B Companies Beef Up Their Websites, borrowing from b2c retailers and breaking new ground
        - Internet Retailer, B2B e-Commerce is poised for growth

    THIS DOCUMENT IS FOR INFORMATIONAL PURPOSES ONLY AND MAY NOT BE INCORPORATED INTO A CONTRACT OR AGREEMENT

    Read the article
