Search Results

Search found 21392 results on 856 pages for 'order of operations'.


  • WCF Bidirectional serialization fails

    - by Gena Verdel
    I'm trying to take advantage of bidirectional serialization of some relational LINQ-to-SQL generated entity classes. When using the unidirectional option everything works just fine, but the moment I add IsReferenceType=true, objects fail to get transported over the TCP binding. Sample code: Entity class: [Table(Name="dbo.Blocks")] [DataContract()] public partial class Block : INotifyPropertyChanging, INotifyPropertyChanged { private static PropertyChangingEventArgs emptyChangingEventArgs = new PropertyChangingEventArgs(String.Empty); private long _ID; private int _StatusID; private string _Name; private bool _IsWithControlPoints; private long _DivisionID; private string _SHAPE; private EntitySet<BlockByWorkstation> _BlockByWorkstations; private EntitySet<PlanningPointAppropriation> _PlanningPointAppropriations; private EntitySet<Neighbor> _Neighbors; private EntitySet<Neighbor> _Neighbors1; private EntitySet<Task> _Tasks; private EntitySet<PlanningPointByBlock> _PlanningPointByBlocks; private EntitySet<ControlPointByBlock> _ControlPointByBlocks; private EntityRef<Division> _Division; private bool serializing; #region Extensibility Method Definitions partial void OnLoaded(); partial void OnValidate(System.Data.Linq.ChangeAction action); partial void OnCreated(); partial void OnIDChanging(long value); partial void OnIDChanged(); partial void OnStatusIDChanging(int value); partial void OnStatusIDChanged(); partial void OnNameChanging(string value); partial void OnNameChanged(); partial void OnIsWithControlPointsChanging(bool value); partial void OnIsWithControlPointsChanged(); partial void OnDivisionIDChanging(long value); partial void OnDivisionIDChanged(); partial void OnSHAPEChanging(string value); partial void OnSHAPEChanged(); #endregion public Block() { this.Initialize(); } [Column(Storage="_ID", AutoSync=AutoSync.OnInsert, DbType="BigInt NOT NULL IDENTITY", IsPrimaryKey=true, IsDbGenerated=true)] [DataMember(Order=1)] public override long ID { get { return this._ID; } set { if ((this._ID != value)) { this.OnIDChanging(value); this.SendPropertyChanging(); this._ID = value; this.SendPropertyChanged("ID"); this.OnIDChanged(); } } } [Column(Storage="_StatusID", DbType="Int NOT NULL")] [DataMember(Order=2)] public int StatusID { get { return this._StatusID; } set { if ((this._StatusID != value)) { this.OnStatusIDChanging(value); this.SendPropertyChanging(); this._StatusID = value; this.SendPropertyChanged("StatusID"); this.OnStatusIDChanged(); } } } [Column(Storage="_Name", DbType="NVarChar(255)")] [DataMember(Order=3)] public string Name { get { return this._Name; } set { if ((this._Name != value)) { this.OnNameChanging(value); this.SendPropertyChanging(); this._Name = value; this.SendPropertyChanged("Name"); this.OnNameChanged(); } } } [Column(Storage="_IsWithControlPoints", DbType="Bit NOT NULL")] [DataMember(Order=4)] public bool IsWithControlPoints { get { return this._IsWithControlPoints; } set { if ((this._IsWithControlPoints != value)) { this.OnIsWithControlPointsChanging(value); this.SendPropertyChanging(); this._IsWithControlPoints = value; this.SendPropertyChanged("IsWithControlPoints"); this.OnIsWithControlPointsChanged(); } } } [Column(Storage="_DivisionID", DbType="BigInt NOT NULL")] [DataMember(Order=5)] public long DivisionID { get { return this._DivisionID; } set { if ((this._DivisionID != value)) { if (this._Division.HasLoadedOrAssignedValue) { throw new System.Data.Linq.ForeignKeyReferenceAlreadyHasValueException(); } this.OnDivisionIDChanging(value); 
this.SendPropertyChanging(); this._DivisionID = value; this.SendPropertyChanged("DivisionID"); this.OnDivisionIDChanged(); } } } [Column(Storage="_SHAPE", DbType="Text", UpdateCheck=UpdateCheck.Never)] [DataMember(Order=6)] public string SHAPE { get { return this._SHAPE; } set { if ((this._SHAPE != value)) { this.OnSHAPEChanging(value); this.SendPropertyChanging(); this._SHAPE = value; this.SendPropertyChanged("SHAPE"); this.OnSHAPEChanged(); } } } [Association(Name="Block_BlockByWorkstation", Storage="_BlockByWorkstations", ThisKey="ID", OtherKey="BlockID")] [DataMember(Order=7, EmitDefaultValue=false)] public EntitySet<BlockByWorkstation> BlockByWorkstations { get { if ((this.serializing && (this._BlockByWorkstations.HasLoadedOrAssignedValues == false))) { return null; } return this._BlockByWorkstations; } set { this._BlockByWorkstations.Assign(value); } } [Association(Name="Block_PlanningPointAppropriation", Storage="_PlanningPointAppropriations", ThisKey="ID", OtherKey="MasterBlockID")] [DataMember(Order=8, EmitDefaultValue=false)] public EntitySet<PlanningPointAppropriation> PlanningPointAppropriations { get { if ((this.serializing && (this._PlanningPointAppropriations.HasLoadedOrAssignedValues == false))) { return null; } return this._PlanningPointAppropriations; } set { this._PlanningPointAppropriations.Assign(value); } } [Association(Name="Block_Neighbor", Storage="_Neighbors", ThisKey="ID", OtherKey="FirstBlockID")] [DataMember(Order=9, EmitDefaultValue=false)] public EntitySet<Neighbor> Neighbors { get { if ((this.serializing && (this._Neighbors.HasLoadedOrAssignedValues == false))) { return null; } return this._Neighbors; } set { this._Neighbors.Assign(value); } } [Association(Name="Block_Neighbor1", Storage="_Neighbors1", ThisKey="ID", OtherKey="SecondBlockID")] [DataMember(Order=10, EmitDefaultValue=false)] public EntitySet<Neighbor> Neighbors1 { get { if ((this.serializing && (this._Neighbors1.HasLoadedOrAssignedValues == false))) { return null; } return this._Neighbors1; } set { this._Neighbors1.Assign(value); } } [Association(Name="Block_Task", Storage="_Tasks", ThisKey="ID", OtherKey="BlockID")] [DataMember(Order=11, EmitDefaultValue=false)] public EntitySet<Task> Tasks { get { if ((this.serializing && (this._Tasks.HasLoadedOrAssignedValues == false))) { return null; } return this._Tasks; } set { this._Tasks.Assign(value); } } [Association(Name="Block_PlanningPointByBlock", Storage="_PlanningPointByBlocks", ThisKey="ID", OtherKey="BlockID")] [DataMember(Order=12, EmitDefaultValue=false)] public EntitySet<PlanningPointByBlock> PlanningPointByBlocks { get { if ((this.serializing && (this._PlanningPointByBlocks.HasLoadedOrAssignedValues == false))) { return null; } return this._PlanningPointByBlocks; } set { this._PlanningPointByBlocks.Assign(value); } } [Association(Name="Block_ControlPointByBlock", Storage="_ControlPointByBlocks", ThisKey="ID", OtherKey="BlockID")] [DataMember(Order=13, EmitDefaultValue=false)] public EntitySet<ControlPointByBlock> ControlPointByBlocks { get { if ((this.serializing && (this._ControlPointByBlocks.HasLoadedOrAssignedValues == false))) { return null; } return this._ControlPointByBlocks; } set { this._ControlPointByBlocks.Assign(value); } } [Association(Name="Division_Block", Storage="_Division", ThisKey="DivisionID", OtherKey="ID", IsForeignKey=true, DeleteOnNull=true, DeleteRule="CASCADE")] public Division Division { get { return this._Division.Entity; } set { Division previousValue = this._Division.Entity; if (((previousValue != value) || 
(this._Division.HasLoadedOrAssignedValue == false))) { this.SendPropertyChanging(); if ((previousValue != null)) { this._Division.Entity = null; previousValue.Blocks.Remove(this); } this._Division.Entity = value; if ((value != null)) { value.Blocks.Add(this); this._DivisionID = value.ID; } else { this._DivisionID = default(long); } this.SendPropertyChanged("Division"); } } } public event PropertyChangingEventHandler PropertyChanging; public event PropertyChangedEventHandler PropertyChanged; protected virtual void SendPropertyChanging() { if ((this.PropertyChanging != null)) { this.PropertyChanging(this, emptyChangingEventArgs); } } protected virtual void SendPropertyChanged(String propertyName) { if ((this.PropertyChanged != null)) { this.PropertyChanged(this, new PropertyChangedEventArgs(propertyName)); } } private void attach_BlockByWorkstations(BlockByWorkstation entity) { this.SendPropertyChanging(); entity.Block = this; } private void detach_BlockByWorkstations(BlockByWorkstation entity) { this.SendPropertyChanging(); entity.Block = null; } private void attach_PlanningPointAppropriations(PlanningPointAppropriation entity) { this.SendPropertyChanging(); entity.Block = this; } private void detach_PlanningPointAppropriations(PlanningPointAppropriation entity) { this.SendPropertyChanging(); entity.Block = null; } private void attach_Neighbors(Neighbor entity) { this.SendPropertyChanging(); entity.FirstBlock = this; } private void detach_Neighbors(Neighbor entity) { this.SendPropertyChanging(); entity.FirstBlock = null; } private void attach_Neighbors1(Neighbor entity) { this.SendPropertyChanging(); entity.SecondBlock = this; } private void detach_Neighbors1(Neighbor entity) { this.SendPropertyChanging(); entity.SecondBlock = null; } private void attach_Tasks(Task entity) { this.SendPropertyChanging(); entity.Block = this; } private void detach_Tasks(Task entity) { this.SendPropertyChanging(); entity.Block = null; } private void attach_PlanningPointByBlocks(PlanningPointByBlock entity) { this.SendPropertyChanging(); entity.Block = this; } private void detach_PlanningPointByBlocks(PlanningPointByBlock entity) { this.SendPropertyChanging(); entity.Block = null; } private void attach_ControlPointByBlocks(ControlPointByBlock entity) { this.SendPropertyChanging(); entity.Block = this; } private void detach_ControlPointByBlocks(ControlPointByBlock entity) { this.SendPropertyChanging(); entity.Block = null; } private void Initialize() { this._BlockByWorkstations = new EntitySet<BlockByWorkstation>(new Action<BlockByWorkstation>(this.attach_BlockByWorkstations), new Action<BlockByWorkstation>(this.detach_BlockByWorkstations)); this._PlanningPointAppropriations = new EntitySet<PlanningPointAppropriation>(new Action<PlanningPointAppropriation>(this.attach_PlanningPointAppropriations), new Action<PlanningPointAppropriation>(this.detach_PlanningPointAppropriations)); this._Neighbors = new EntitySet<Neighbor>(new Action<Neighbor>(this.attach_Neighbors), new Action<Neighbor>(this.detach_Neighbors)); this._Neighbors1 = new EntitySet<Neighbor>(new Action<Neighbor>(this.attach_Neighbors1), new Action<Neighbor>(this.detach_Neighbors1)); this._Tasks = new EntitySet<Task>(new Action<Task>(this.attach_Tasks), new Action<Task>(this.detach_Tasks)); this._PlanningPointByBlocks = new EntitySet<PlanningPointByBlock>(new Action<PlanningPointByBlock>(this.attach_PlanningPointByBlocks), new Action<PlanningPointByBlock>(this.detach_PlanningPointByBlocks)); this._ControlPointByBlocks = new 
EntitySet<ControlPointByBlock>(new Action<ControlPointByBlock>(this.attach_ControlPointByBlocks), new Action<ControlPointByBlock>(this.detach_ControlPointByBlocks)); this._Division = default(EntityRef<Division>); OnCreated(); } [OnDeserializing()] [System.ComponentModel.EditorBrowsableAttribute(EditorBrowsableState.Never)] public void OnDeserializing(StreamingContext context) { this.Initialize(); } [OnSerializing()] [System.ComponentModel.EditorBrowsableAttribute(EditorBrowsableState.Never)] public void OnSerializing(StreamingContext context) { this.serializing = true; } [OnSerialized()] [System.ComponentModel.EditorBrowsableAttribute(EditorBrowsableState.Never)] public void OnSerialized(StreamingContext context) { this.serializing = false; } } App.config: <?xml version="1.0" encoding="utf-8" ?> <configuration> <system.web> <compilation debug="true" /> </system.web> <!-- When deploying the service library project, the content of the config file must be added to the host's app.config file. System.Configuration does not support config files for libraries. --> <system.serviceModel> <services> <service behaviorConfiguration="debugging" name="DBServicesLibrary.DBService"> </service> </services> <behaviors> <serviceBehaviors> <behavior name="DBServicesLibrary.DBServiceBehavior"> <!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint above before deployment --> <serviceMetadata httpGetEnabled="True"/> <!-- To receive exception details in faults for debugging purposes, set the value below to true. Set to false before deployment to avoid disclosing exception information --> <serviceDebug includeExceptionDetailInFaults="False" /> </behavior> <behavior name="debugging"> <serviceDebug includeExceptionDetailInFaults="true"/> </behavior> </serviceBehaviors> </behaviors> </system.serviceModel> </configuration> Host part: ServiceHost svh = new ServiceHost(typeof(DBService)); svh.AddServiceEndpoint( typeof(DBServices.Contract.IDBService), new NetTcpBinding(), "net.tcp://localhost:8000"); Client part: ChannelFactory<DBServices.Contract.IDBService> scf; scf = new ChannelFactory<DBServices.Contract.IDBService>(new NetTcpBinding(),"net.tcp://localhost:8000"); _serv = scf.CreateChannel(); ((IContextChannel)_serv).OperationTimeout = new TimeSpan(0, 5, 0);
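    A minimal local probe, offered here as a hedged sketch rather than a verified fix, can separate serializer failures from transport failures: round-trip a Block through DataContractSerializer in a console app. If this throws as well, the problem lies in the reference-tracking serialization of the generated graph (the serializing flag and the OnSerializing/OnDeserializing callbacks above), not in the net.tcp binding.

      // Hedged sketch: uses only the Block type shown above.
      using System;
      using System.IO;
      using System.Runtime.Serialization;

      class SerializationProbe
      {
          static void Main()
          {
              Block block = new Block { StatusID = 1, Name = "Probe" };
              DataContractSerializer serializer = new DataContractSerializer(typeof(Block));
              using (MemoryStream stream = new MemoryStream())
              {
                  serializer.WriteObject(stream, block); // throws here if the graph cannot serialize
                  stream.Position = 0;
                  Block copy = (Block)serializer.ReadObject(stream);
                  Console.WriteLine(copy.Name);
              }
          }
      }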


  • Retrieve EF4 POCOs using WCF REST services starter kit

    - by muruge
    I am using a WCF REST service (GET method) to retrieve my EF4 POCOs. The service seems to work just fine. When I query the URI in my browser I get the results as expected. In my client application I am trying to use the WCF REST Starter Kit's extension method ReadAsDataContract() to convert the result back into my POCO. This works fine when the POCO's navigation property is a single related POCO. The problem is when the navigation property is a collection of related POCOs. The ReadAsDataContract() method throws an exception with the message "Object reference not set to an instance of an object." Below are my POCOs. [DataContract(Namespace = "", Name = "Trip")] public class Trip { [DataMember(Order = 1)] public virtual int TripID { get; set; } [DataMember(Order = 2)] public virtual int RegionID { get; set; } [DataMember(Order = 3)] public virtual System.DateTime BookingDate { get; set; } [DataMember(Order = 4)] public virtual Region Region { // removed for brevity } } [DataContract(Namespace = "", Name = "Region")] public class Region { [DataMember(Order = 1)] public virtual int RegionID { get; set; } [DataMember(Order = 2)] public virtual string RegionCode { get; set; } [DataMember(Order = 3)] public virtual FixupCollection<Trip> Trips { // removed for brevity } } [CollectionDataContract(Namespace = "", Name = "{0}s", ItemName = "{0}")] [Serializable] public class FixupCollection<T> : ObservableCollection<T> { protected override void ClearItems() { new List<T>(this).ForEach(t => Remove(t)); } protected override void InsertItem(int index, T item) { if (!this.Contains(item)) { base.InsertItem(index, item); } } } And this is how I am trying to retrieve a Region POCO. static void GetRegion() { string uri = "http://localhost:8080/TripService/Regions?id=1"; HttpClient client = new HttpClient(uri); using (HttpResponseMessage response = client.Get(uri)) { Region region; response.EnsureStatusIsSuccessful(); try { region = response.Content.ReadAsDataContract<Region>(); // this line throws exception because Region returns a collection of related trips Console.WriteLine(region.RegionName); } catch (Exception ex) { Console.WriteLine(ex.Message); } } } Would appreciate any pointers.
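    One way to make the failure visible, sketched here under the assumption that the Starter Kit's HttpContent exposes a ReadAsString() method, is to pull the raw XML out of the response and deserialize it by hand with DataContractSerializer in place of the ReadAsDataContract() call. The exception then surfaces with a usable stack trace, and the wire XML can be compared against the contract names (for example, the {0}s/{0} names generated for FixupCollection<Trip>):

      // Hedged sketch: inspect the wire format, then deserialize manually.
      string xml = response.Content.ReadAsString(); // assumed Starter Kit API
      Console.WriteLine(xml); // compare element names against the data contracts
      DataContractSerializer serializer = new DataContractSerializer(typeof(Region));
      using (System.Xml.XmlReader reader =
             System.Xml.XmlReader.Create(new System.IO.StringReader(xml)))
      {
          Region region = (Region)serializer.ReadObject(reader);
          Console.WriteLine(region.RegionCode);
      }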


  • Haskell newbie on types

    - by garulfo
    I'm completely new to Haskell (and more generally to functional programming), so forgive me if this is really basic stuff. To get more than a taste, I'm trying to implement in Haskell some algorithmic stuff I'm working on. I have a simple module Interval that implements intervals on the line. It contains the type data Interval t = Interval t t the helper function makeInterval :: (Ord t) => t -> t -> Interval t makeInterval l r | l <= r = Interval l r | otherwise = error "bad interval" and some utility functions about intervals. Here, my interest lies in multidimensional intervals (d-intervals), those objects that are composed of d intervals. I want to separately consider d-intervals that are the union of d disjoint intervals on the line (multiple interval) from those that are the union of d intervals on d separate lines (track interval). With distinct algorithmic treatments in mind, I think it would be nice to have two distinct types (even if both are lists of intervals here) such as import qualified Interval as I -- Multiple interval newtype MInterval t = MInterval [I.Interval t] -- Track interval newtype TInterval t = TInterval [I.Interval t] to allow for distinct sanity checks, e.g. makeMInterval :: (Ord t) => [I.Interval t] -> MInterval t makeMInterval is = if foldr (&&) True [I.precedes i i' | (i, i') <- zip is (tail is)] then (MInterval is) else error "bad multiple interval" makeTInterval :: (Ord t) => [I.Interval t] -> TInterval t makeTInterval = TInterval I now get to the point, at last! But some functions are naturally concerned with both multiple intervals and track intervals. For example, a function order would return the number of intervals in a multiple interval or a track interval. What can I do? Adding -- Dimensional interval data DInterval t = MIntervalStuff (MInterval t) | TIntervalStuff (TInterval t) does not help much, since, if I understand correctly (correct me if I'm wrong), I would have to write order :: DInterval t -> Int order (MIntervalStuff (MInterval is)) = length is order (TIntervalStuff (TInterval is)) = length is and call order as order (MIntervalStuff is) or order (TIntervalStuff is) when is is an MInterval or a TInterval. Not that great, it looks odd. Nor do I want to duplicate the function (I have many functions that are concerned with both multiple and track intervals, and some other d-interval definitions such as equal length multiple and track intervals). I'm left with the feeling that I'm completely wrong and have missed some important point about types in Haskell (and/or can't forget enough here about OO programming). So, quite a newbie question, what would be the best way in Haskell to deal with such a situation? Do I have to forget about introducing MInterval and TInterval and go with one type only? Thanks a lot for your help, Garulfo


  • XML Postback issue

    - by Mikey1980
    I have a script that is designed to parse XML postbacks from Ultracart; right now it just dumps them into a MySQL table. The script works fine if I point it to an XML file on my localhost, but using 'php://input' it doesn't seem to grab anything. My logs show Apache returning 200 after the post, so I have no idea what could be wrong or how to drill down into the issue. Here's the code: $doc = new DOMDocument(); $doc->loadXML($page); $handle = fopen("test2/".time().".xml", "w+"); fwrite($handle,trim($page)); // it doesn't save this either :'( fclose($handle); // fclose() needs the handle require_once('includes/database.php'); $db = new Database('localhost', 'user', 'password', 'db_name'); $data = array(); $exports = $doc->getElementsByTagName("export"); foreach ($exports as $export) { $orders = $export->getElementsByTagName("order"); foreach($orders as $order) { $data['order_id'] = $order->getElementsByTagName("order_id")->item(0)->nodeValue; $data['payment_status'] = $order->getElementsByTagName("payment_status")->item(0)->nodeValue; $date_array = explode(" ",$order->getElementsByTagName("payment_date_time")->item(0)->nodeValue); if ($date_array[1] == 'JAN') { $date_array[1] = '01'; } if ($date_array[1] == 'FEB') { $date_array[1] = '02'; } if ($date_array[1] == 'MAR') { $date_array[1] = '03'; } if ($date_array[1] == 'APR') { $date_array[1] = '04'; } if ($date_array[1] == 'MAY') { $date_array[1] = '05'; } // converts Ultracart date to if ($date_array[1] == 'JUN') { $date_array[1] = '06'; } // MySQL date if ($date_array[1] == 'JUL') { $date_array[1] = '07'; } if ($date_array[1] == 'AUG') { $date_array[1] = '08'; } if ($date_array[1] == 'SEP') { $date_array[1] = '09'; } if ($date_array[1] == 'OCT') { $date_array[1] = '10'; } if ($date_array[1] == 'NOV') { $date_array[1] = '11'; } if ($date_array[1] == 'DEC') { $date_array[1] = '12'; } $data['payment_date'] = $date_array[2]."-".$date_array[1]."-".$date_array[0]; $data['payment_time'] = $date_array[3]; //... we'll skip this, there are 80 some elements $data['discount'] = $order->getElementsByTagName("discount")->item(0)->nodeValue; $data['distribution_center_code'] = $order->getElementsByTagName("distribution_center_code")->item(0)->nodeValue; } } } $db->insert('order_history',$data); } else die('ERROR: Token Check Failed!');


  • AD - DirectoryServices: VB.NET 2.0 - Speaking architecture...

    - by Will Marcouiller
    I've been mandated to write an application to migrate the Active Directory access models to another environment. Here's the context: I'm stuck with VB.NET 2005 and .NET Framework 2.0; The application must use the Windows authenticated user to manage AD; The objects I have to handle are Groups, Users and OrganizationalUnits; I intend to use the Façade design pattern to provide ease of use and fully reusable code; I plan to write a factory for each of the objects managed (group, OU, user); The use of Attributes should be useful here, I guess; As everything is about the DirectoryEntry class when accessing the AD, it seems a good candidate for generic types. Obligatory features: User creates new OUs manually; User creates new group manually; User creates new user (these users are service accounts) manually; Application reads an XML file which contains the OUs, groups and users to create; Application informs the user about the OUs, groups and users that shall be created; User specifies the domain environment where to migrate the XML input file designated objects; User makes changes if needed, and launches the task operations; Application performs the operations required by the XML input file against the underlying AD, as specified by the user; Application informs the user upon completion. Linear features: User fetches OUs, groups, users; User changes OUs, groups, users; User deletes OUs, groups, users; The application logs AD entries and operations performed, plus errors and exceptions; Nice-to-have features: Application rolls back operations on error or exception. I've been working for weeks now to get acquainted with the AD and the System.DirectoryServices assembly. But I don't seem to find a way to be fully satisfied with what I'm doing, and I'm always looking for better. I have studied Bart De Smet's LINQ to AD on CodePlex, but then again, I can't use it as I'm stuck with .NET 2.0, so no LINQ! But I've learned about Attributes, and seen that he's working with generic types as he codes a DirectorySource class to perform the operations for OUs, groups and users. Any suggestions? Thanks for any help, code sample, ideas, architectural solution, everything!
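    As a starting point for the factory idea, here is a minimal sketch (written in C# for brevity, with path and names hypothetical; the same calls translate directly to VB.NET 2005) using only plain System.DirectoryServices. Note that any add involves two DirectoryEntry instances: the parent container and the new child.

      // Hedged sketch: a factory-style helper for one of the three object kinds.
      using System.DirectoryServices;

      public static class OrganizationalUnitFactory
      {
          public static void Create(string parentLdapPath, string ouName)
          {
              using (DirectoryEntry parent = new DirectoryEntry(parentLdapPath))
              using (DirectoryEntry ou = parent.Children.Add("OU=" + ouName, "organizationalUnit"))
              {
                  ou.CommitChanges(); // nothing exists in AD until this call
              }
          }
      }

      // Hypothetical usage:
      // OrganizationalUnitFactory.Create("LDAP://DC=example,DC=local", "ServiceAccounts");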


  • AD-DirectoryServices: .NET 2.0 - Speaking architecture, approach and best practices... Suggestions?

    - by Will Marcouiller
    I've been mandated to write an application to migrate the Active Directory access models to another environment. Here's the context: I'm stuck with VB.NET 2005 and .NET Framework 2.0; The application must use the Windows authenticated user to manage AD; The objects I have to handle are Groups, Users and OrganizationalUnits; I intend to use the Façade design pattern to provide ease of use and fully reusable code; I plan to write a factory for each of the objects managed (group, OU, user); The use of Attributes should be useful here, I guess; As everything is about the DirectoryEntry class when accessing the AD, it seems a good candidate for generic types. Obligatory features: User creates new OUs manually; User creates new group manually; User creates new user (these users are service accounts) manually; Application reads an XML file which contains the OUs, groups and users to create; Application informs the user about the OUs, groups and users that shall be created; User specifies the domain environment where to migrate the XML input file designated objects; User makes changes if needed, and launches the task operations; Application performs the operations required by the XML input file against the underlying AD, as specified by the user; Application informs the user upon completion. Linear features: User fetches OUs, groups, users; User changes OUs, groups, users; User deletes OUs, groups, users; The application logs AD entries and operations performed, plus errors and exceptions; Nice-to-have features: Application rolls back operations on error or exception. I've been working for weeks now to get acquainted with the AD and the System.DirectoryServices assembly. But I don't seem to find a way to be fully satisfied with what I'm doing, and I'm always looking for better. I have studied Bart De Smet's LINQ to AD on CodePlex, but then again, I can't use it as I'm stuck with .NET 2.0, so no LINQ! But I've learned about Attributes, and seen that he's working with generic types as he codes a DirectorySource class to perform the operations for OUs, groups and users. I have been able to add groups to the AD; I have been able to add users to the AD; The created user is automatically disabled? I seem to get confused with the use of an LDAP path to add objects. For instance, one needs two instances of the System.DirectoryServices.DirectoryEntry class to add a group. Why is this? Any suggestions? Thanks for any help, code samples, ideas, architectural solutions, everything!
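    On the two specific puzzles here: a freshly created AD user account is indeed disabled by default, and the two DirectoryEntry instances are simply the parent container and the child being added. A hedged sketch (C# for brevity, path and names hypothetical) of creating a user that ends up enabled:

      // Hedged sketch: create a user, set a password, then enable it by
      // setting userAccountControl to NORMAL_ACCOUNT (0x200) without the
      // ACCOUNTDISABLE flag (0x2).
      using System.DirectoryServices;

      public static class UserFactory
      {
          public static void Create(string parentLdapPath, string accountName, string password)
          {
              using (DirectoryEntry parent = new DirectoryEntry(parentLdapPath)) // instance 1: the container
              using (DirectoryEntry user = parent.Children.Add("CN=" + accountName, "user")) // instance 2: the new object
              {
                  user.Properties["sAMAccountName"].Value = accountName;
                  user.CommitChanges(); // the account now exists, but disabled
                  user.Invoke("SetPassword", new object[] { password });
                  user.Properties["userAccountControl"].Value = 0x200; // enabled normal account
                  user.CommitChanges();
              }
          }
      }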


  • How can I use a SAX parser?

    - by moustafa
    This is what the result should look like when I parse it through a SAX parser: http://img13.imageshack.us/img13/6950/75914446.jpg This is the XML source code from which I need to generate the display: <orders> <order> <count>37</count> <price>49.99</price> <book> <isbn>0130897930</isbn> <title>Core Web Programming Second Edition</title> <authors> <count>2</count> <author>Marty Hall</author> <author>Larry Brown</author> </authors> </book> </order> <order> <count>1</count> <price>9.95</price> <yacht> <manufacturer>Luxury Yachts, Inc.</manufacturer> <model>M-1</model> <standardFeatures oars="plastic" lifeVests="none">false</standardFeatures> </yacht> </order> <order> <count>3</count> <price>22.22</price> <book> <isbn>B000059Z4H</isbn> <title>Harry Potter and the Order of the Phoenix</title> <authors> <count>1</count> <author>J.K. Rowling</author> </authors> </book> </order> </orders> I really have no clue how to code the functions, but I have just set up the parser: $xmlParser = xml_parser_create("UTF-8"); xml_parser_set_option($xmlParser, XML_OPTION_CASE_FOLDING, false); xml_set_element_handler($xmlParser, 'startElement', 'endElement'); xml_set_character_data_handler($xmlParser, 'HandleCharacterData'); $fileName = 'orders.xml'; if (!($fp = fopen($fileName, 'r'))){ die('Cannot open the XML file: ' . $fileName); } while ($data = fread($fp, 4096)){ $parsedOkay = xml_parse($xmlParser, $data, feof($fp)); if (!$parsedOkay){ print ("There was an error or the parser was finished."); break; } } xml_parser_free($xmlParser); function startElement($xmlParser, $name, $attribs) { } function endElement($parser, $name) { } function HandleCharacterData($parser, $data) { }


  • Problem using a SAX parser

    - by moustafa
    Hi guys, I have this small class task that I'm having trouble with. I need to create a PHP file using SAX to generate the display shown below from an XML file. I'm not sure how to use '|' to represent each element's level, where the root element orders is at level zero. This is what the result should look like when I parse it through a SAX parser: http://img13.imageshack.us/img13/6950/75914446.jpg This is the XML source code from which I need to generate the display: <orders> <order> <count>37</count> <price>49.99</price> <book> <isbn>0130897930</isbn> <title>Core Web Programming Second Edition</title> <authors> <count>2</count> <author>Marty Hall</author> <author>Larry Brown</author> </authors> </book> </order> <order> <count>1</count> <price>9.95</price> <yacht> <manufacturer>Luxury Yachts, Inc.</manufacturer> <model>M-1</model> <standardFeatures oars="plastic" lifeVests="none">false</standardFeatures> </yacht> </order> <order> <count>3</count> <price>22.22</price> <book> <isbn>B000059Z4H</isbn> <title>Harry Potter and the Order of the Phoenix</title> <authors> <count>1</count> <author>J.K. Rowling</author> </authors> </book> </order> </orders>


  • Smoke testing a .NET web application

    - by pdr
    I cannot believe I'm the first person to go through this thought process, so I'm wondering if anyone can help me out with it. Current situation: developers write a web site, operations deploy it. Once deployed, a developer Smoke Tests it, to make sure the deployment went smoothly. To me this feels wrong, it essentially means it takes two people to deploy an application; in our case those two people are on opposite sides of the planet and timezones come into play, causing havoc. But the fact remains that developers know what the minimum set of tests is and that may change over time (particularly for the web service portion of our app). Operations, with all due respect to them (and they would say this themselves), are button-pushers who need a set of instructions to follow. The manual solution is that we document the test cases and operations follow that document each time they deploy. That sounds painful, plus they may be deploying different versions to different environments (specifically UAT and Production) and may need a different set of instructions for each. On top of this, one of our near-future plans is to have an automated daily deploy environment, so then we'll have to instruct a computer as to how to deploy a given version of our app. I would dearly like to add to that instructions for how to smoke test the app. Now developers are better at documenting instructions for computers than they are for people, so the obvious solution seems to be to use a combination of nUnit (I know these aren't unit tests per se, but it is a built-for-purpose test runner) and either the Watin or Selenium APIs to run through the obvious browser steps and call to the web service and explain to the Operations guys how to run those unit tests. I can do that; I have mostly done it already. But wouldn't it be nice if I could make that process simpler still? At this point, the Operations guys and the computer are going to have to know which set of tests relate to which version of the app and tell the nUnit runner which base URL it should point to (say, www.example.com = v3.2 or test.example.com = v3.3). Wouldn't it be nicer if the test runner itself had a way of giving it a base URL and letting it download say a zip file, unpack it and edit a configuration file automatically before running any test fixtures it found in there? Is there an open source app that would do that? Is there a need for one? Is there a solution using something other than nUnit, maybe Fitnesse? For the record, I'm looking at .NET-based tools first because most of the developers are primarily .NET developers, but we're not married to it. If such a tool exists using other languages to write the tests, we'll happily adapt, as long as there is a test runner that works on Windows.
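    A sketch of the direction described above, hedged (NUnit-based, with the environment variable name and URLs invented for illustration): let the fixture take its base URL from the environment, so the same compiled smoke tests can be pointed at UAT, Production, or a daily automated deploy without code edits.

      // Hedged sketch: smoke tests parameterized by base URL.
      using System;
      using System.Net;
      using NUnit.Framework;

      [TestFixture]
      public class SmokeTests
      {
          private string baseUrl;

          [SetUp]
          public void SetUp()
          {
              // e.g. set SMOKE_BASE_URL=https://test.example.com before the run
              baseUrl = Environment.GetEnvironmentVariable("SMOKE_BASE_URL") ?? "http://localhost/";
          }

          [Test]
          public void HomePage_Responds_With_Html()
          {
              using (WebClient client = new WebClient())
              {
                  string body = client.DownloadString(baseUrl);
                  StringAssert.Contains("<html", body.ToLowerInvariant());
              }
          }
      }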


  • Can sorting Japanese kanji words be done programmatically?

    - by Mason
    I've recently discovered, to my astonishment (having never really thought about it before), machine-sorting Japanese proper nouns is apparently not possible. I work on an application that must allow the user to select a hospital from a 3-menu interface. The first menu is Prefecture, the second is City Name, and the third is Hospital. Each menu should be sorted, as you might expect, so the user can find what they want in the menu. Let me outline what I have found, as preamble to my question: The expected sort order for Japanese words is based on their pronunciation. Kanji do not have an inherent order (there are tens of thousands of Kanji in use), but the Japanese phonetic syllabaries do have an order: あいうえおかきくけこさしすせそ... and on for the fifty traditional distinct sounds (a few of which are obsolete in modern Japanese). This sort order is called 五十音順 (gojuu on jun, or '50-sound order'). Therefore, Kanji words should be sorted in the same order as they would be if they were written in hiragana. (You can represent any kanji word in phonetic hiragana in Japanese.) The kicker: there is no canonical way to determine the pronunciation of a given word written in kanji. You never know. Some kanji have ten or more different pronunciations, depending on the word. Many common words are in the dictionary, and I could probably hack together a way to look them up from one of the free dictionary databases, but proper nouns (e.g. hospital names) are not in the dictionary. So, in my application, I have a list of every prefecture, city, and hospital in Japan. In order to sort these lists, which is a requirement, I need a matching list of each of these names in phonetic form (kana). I can't come up with anything other than paying somebody fluent in Japanese (I'm only so-so) to manually transcribe them. Before I do so though: Is it possible that I am totally high on fire, and there actually is some way to do this sorting without creating my own mappings of kanji words to phonetic readings, that I have somehow overlooked? Is there a publicly available mapping of prefecture/city names, from the government or something? That would reduce the manual mapping I'd need to do to only hospital names. Does anybody have any other advice on how to approach this problem? Any programming language is fine--I'm working with Ruby on Rails but I would be delighted if I could just write a program that would take the kanji input (say 40,000 proper nouns) and then output the phonetic representations as data that I could import into my Rails app.
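    Since any language is acceptable here, a hedged C# sketch (with invented data) of the fallback the question converges on: once a kanji-to-kana reading table exists, however it was obtained, sorting reduces to comparing the readings, because Unicode lays out the hiragana block in nearly gojuon order.

      // Hedged sketch: sort proper nouns by a hand-maintained kana reading map.
      using System;
      using System.Collections.Generic;

      class KanaSort
      {
          static void Main()
          {
              // kanji name -> hiragana reading (the part that needs human verification)
              Dictionary<string, string> readings = new Dictionary<string, string>
              {
                  { "東京", "とうきょう" },
                  { "大阪", "おおさか" },
                  { "札幌", "さっぽろ" },
              };
              List<string> names = new List<string>(readings.Keys);
              // Ordinal comparison of hiragana approximates 50-sound order; a
              // ja-JP culture-aware comparer would handle remaining edge cases.
              names.Sort((a, b) => string.CompareOrdinal(readings[a], readings[b]));
              names.ForEach(Console.WriteLine);
          }
      }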


  • How to approach copying objects with smart pointers as class attributes?

    - by tomislav-maric
    From the boost library documentation I read this: Conceptually, smart pointers are seen as owning the object pointed to, and thus responsible for deletion of the object when it is no longer needed. I have a very simple problem: I want to use RAII for pointer attributes of a class that is Copyable and Assignable. The copy and assignment operations should be deep: every object should have its own copy of the actual data. Also, RTTI needs to be available for the attributes (their type may also be determined at runtime). Should I be searching for an implementation of a Copyable smart pointer (the data are small, so I don't need Copy on Write pointers), or do I delegate the copy operation to the copy constructors of my objects as shown in this answer? Which smart pointer do I choose for simple RAII of a class that is copyable and assignable? (I'm thinking that the unique_ptr with delegated copy/assignment operations to the class copy constructor and assignment operator would make a proper choice, but I am not sure) Here's pseudocode for the problem using raw pointers; it's just a problem description, not running C++ code: // Operation interface class ModelOperation { public: virtual void operate() = 0; virtual ~ModelOperation() {} // virtual destructor: MyModel deletes through this interface }; // Implementation of an operation called Special class SpecialModelOperation : public ModelOperation { private: // Private attributes are present here in a real implementation. public: // Implement operation void operate () {}; }; // All operations conform to ModelOperation interface // These are possible operation names: // class MoreSpecialOperation; // class DifferentOperation; // Concrete model with different operations class MyModel { private: ModelOperation* firstOperation_; ModelOperation* secondOperation_; public: MyModel() : firstOperation_(0), secondOperation_(0) { // Forgetting about run-time type definition from input files here. firstOperation_ = new MoreSpecialOperation(); secondOperation_ = new DifferentOperation(); } void operate() { firstOperation_->operate(); secondOperation_->operate(); } ~MyModel() { delete firstOperation_; firstOperation_ = 0; delete secondOperation_; secondOperation_ = 0; } }; int main() { MyModel modelOne; // Some internal scope { // I want modelTwo to have its own set of copied, not referenced // operations, and at the same time I need RAII to work for it, // as soon as it goes out of scope. MyModel modelTwo (modelOne); } return 0; }


  • Tracking non-HTML (like PDF) downloads with jQuery and Google Analytics

    - by developerit
    Hi folks, it’s been quite calm at Developer IT this summer since we were all involved in other projects, but we are slowly coming back. In this post, we will present a simple way of tracking file downloads with Google Analytics with the help of jQuery. We work for a client that offers a lot of PDF files to download on their web site and wanted to know which ones are the most popular. They have used Google Analytics for a long time now, and we did not want a second interface in order to present those stats to our client. So using IIS logs was not an option to consider. Since Google already offers us a splendid web interface and a powerful API, we decided to hook simple JavaScript code into the jQuery click event to notify Analytics that a PDF has been requested. (function ($) { function trackLink(e) { var url = $(this).attr('href'); //alert(url); // for debug purpose // old page tracker code pageTracker._trackPageview(url); // you can use the new one too _gaq.push(["_trackPageview",url]); //always return true, in order for the browser to continue its job return true; } // When DOM ready $(function () { // hook up the click event $('.pdf-links a').click(trackLink); }); })(jQuery); You can be more precise, or make sure not to miss a single click, by changing the selector which hooks up the click event. I have been using this code to track AJAX requests and it works flawlessly.


  • Parallelism in .NET – Introduction

    - by Reed
    Parallel programming is something that every professional developer should understand, but is rarely discussed or taught in detail in a formal manner.  Software users are no longer content with applications that lock up the user interface regularly, or take large amounts of time to process data unnecessarily.  Modern development requires the use of parallelism.  There are no longer any excuses for us as developers. Learning to write parallel software is challenging.  It requires more than reading that one chapter on parallelism in our programming language book of choice… Today’s systems are no longer getting faster with each generation; in many cases, newer computers are actually slower than previous generation systems.  Modern hardware is shifting towards conservation of power, with processing scalability coming from having multiple computer cores, not faster and faster CPUs.  Our CPU frequencies no longer double on a regular basis, but Moore’s Law is still holding strong.  Now, however, instead of scaling transistors in order to make processors faster, hardware manufacturers are scaling the transistors in order to add more discrete hardware processing threads to the system. This changes how we should think about software.  In order to take advantage of modern systems, we need to redesign and rewrite our algorithms to work in parallel.  As with any design domain, it helps tremendously to have a common language, as well as a common set of patterns and tools. For .NET developers, this is an exciting time for parallel programming.  Version 4 of the .NET Framework is adding the Task Parallel Library.  This has been back-ported to .NET 3.5sp1 as part of the Reactive Extensions for .NET, and is available for use today in both .NET 3.5 and .NET 4.0 beta. In order to fully utilize the Task Parallel Library and parallelism, both in .NET 4 and previous versions, we need to understand the proper terminology.  For this series, I will provide an introduction to some of the basic concepts in parallelism, and relate them to the tools available in .NET.
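    To make that concrete before the series dives into terminology, here is a minimal, hedged example of the kind of data parallelism the Task Parallel Library enables (available in .NET 4, or on .NET 3.5 SP1 via the Reactive Extensions back-port):

      // Hedged sketch: a data-parallel loop; the TPL partitions the index
      // range across the available cores.
      using System;
      using System.Threading.Tasks;

      class Program
      {
          static void Main()
          {
              double[] results = new double[10000000];
              Parallel.For(0, results.Length, i =>
              {
                  results[i] = Math.Sqrt(i); // iterations are independent, so this is safe
              });
              Console.WriteLine(results[results.Length - 1]);
          }
      }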


  • SQL SERVER – Puzzle to Win Print Book – Write T-SQL Self Join Without Using FIRST_VALUE and LAST_VALUE

    - by pinaldave
    Last week we asked a puzzle: SQL SERVER – Puzzle to Win Print Book – Functions FIRST_VALUE and LAST_VALUE with OVER clause and ORDER BY. This puzzle got very interesting participation. The details of the winner are listed here. In this puzzle we received two very important pieces of feedback: This puzzle clarified the concepts of FIRST_VALUE and LAST_VALUE for the participants. As it was based on SQL Server 2012, many could not participate, as they had not yet installed SQL Server 2012. I really appreciate the users’ feedback and decided to come up with something that is fun and helps in learning a new feature of SQL Server 2012. Please read yesterday’s blog post SQL SERVER – Introduction to LEAD and LAG – Analytic Functions Introduced in SQL Server 2012 before continuing this puzzle, as it is based on yesterday’s post. Yesterday I ran the following query, which uses the functions FIRST_VALUE and LAST_VALUE. USE AdventureWorks GO SELECT s.SalesOrderID,s.SalesOrderDetailID,s.OrderQty, FIRST_VALUE(SalesOrderDetailID) OVER (ORDER BY SalesOrderDetailID) FstValue, LAST_VALUE(SalesOrderDetailID) OVER (ORDER BY SalesOrderDetailID) LstValue FROM Sales.SalesOrderDetail s WHERE SalesOrderID IN (43670, 43669, 43667, 43663) ORDER BY s.SalesOrderID,s.SalesOrderDetailID,s.OrderQty GO The above query will give us the following result: Puzzle: Now use a T-SQL self join, where the same table is joined to itself, and get the same result without using the FIRST_VALUE or LAST_VALUE functions. Hint: Introduction to JOINs – Basic of JOINs Self Join A new analytic functions in SQL Server Denali CTP3 – LEAD() and LAG() Rules: Leave a comment with your detailed answer before the Nov 21 blog post. Open world-wide (where Amazon ships books). If you blog about the puzzle’s solution and you win, you win an additional surprise gift as well. Prizes: A print copy of my new book SQL Server Interview Questions Amazon|Flipkart. If you already have this book, you can opt for any of my other books SQL Wait Stats [Amazon|Flipkart|Kindle] and SQL Programming [Amazon|Flipkart|Kindle]. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology


  • Pagination, Duplicate Content, and SEO

    - by Iamtotallylost
    Please consider a list of items (forum comments, articles, shoes, doesn't matter) which are spread over multiple pages. Different sort orders are supported (by date, by popularity, by price, etc). So, a URL might look like this (I use the query style here to simplify things): /items?id=1234&page=42&sort=popularity /items?id=1234&page=5&sort=date Now, in terms of SEO, I think I should be worried about duplicate content. After all, each item appears at least as many times as there are sort orders. I've seen Matt Cutts talking about the rel=canonical link tag, but he also said that the canonical page should have very similar content. But this is not the case here because page #1 in a non-canonical sort order might have completely different items than page #1 in the canonical sort order. For a given non-canonical page, there is no clear canonical page listing all the same items, so I think rel=canonical won't help here. Then I thought about using the noindex meta tag on all pages with non-canonical sort order, and not using it on all pages with canonical sort order. However, if I use that method, what will happen with backlinks that are going to non-canonical pages -- will they still spread their page rank juice, even though the first page Googlebot (or any other crawler) is going to encounter is marked as "noindex"? Can you please comment on my problem and what you think is the best solution? If you think you have a better solution, please consider that 1) I do not want to use JavaScript for this, 2) I do not want all the items to be on one page. Thank you.


  • MVC Automatic Menu

    - by Nuri Halperin
    An ex-colleague of mine used to call his SQL script generator "Super-Scriptmatic 2000". It impressed our then boss little, but was fun to say and use. We called every batch job and script "something 2000" from that day on. I'm tempted to call this one Menu-Matic 2000, except it's waaaay past 2000. Oh well. The problem: I'm developing a bunch of stuff in MVC. There's no PM to generate mounds of requirements and there's no UX architect to create wireframes. During development, things change. Specifically, actions get renamed, moved from controller x to y etc. Well, as the site grows, it becomes a major pain to keep a static menu up to date, because the links change. The HtmlHelper doesn't live up to its name and provides little help. How do I keep this growing list of pesky little forgotten actions reined in? The general plan is: Decorate every action you want as a menu item with a custom attribute; reflect out all menu items into a structure at load time; render the menu as CSS-friendly <ul><li> HTML. The MvcMenuItemAttribute decorates an action, designating it to be included as a menu item: [AttributeUsage(AttributeTargets.Method, AllowMultiple = true)] public class MvcMenuItemAttribute : Attribute { public string MenuText { get; set; } public int Order { get; set; } public string ParentLink { get; set; } internal string Controller { get; set; } internal string Action { get; set; } #region ctor public MvcMenuItemAttribute(string menuText) : this(menuText, 0) { } public MvcMenuItemAttribute(string menuText, int order) { MenuText = menuText; Order = order; } internal string Link { get { return string.Format("/{0}/{1}", Controller, this.Action); } } internal MvcMenuItemAttribute ParentItem { get; set; } #endregion } The MenuText allows overriding the text displayed on the menu. The Order allows the items to be ordered. The ParentLink allows you to make this item a child of another menu item. An example action could then be decorated thusly: [MvcMenuItem("Tracks", Order = 20, ParentLink = "/Session/Index")]. All pretty straightforward, methinks. The challenge with menu hierarchy becomes fairly apparent when you try to render a menu and highlight the "current" item or render a breadcrumb control. Both encounter an ambiguity if you allow a data source to have more than one menu item with the same URL link. The issue is that there is no great way to tell which link a person clicked. Using the referring URL will fail if a user bookmarked the page. Using some extra query string to disambiguate duplicate URLs essentially changes the links, and also adds a chance of collision with other query parameters. Besides, that smells. The stock ASP.Net sitemap provider simply disallows duplicate URLs. I decided not to, and simply pick the first one encountered as the "current". Although it doesn't solve the issue completely – one might say they wanted the second of the 2 links to be "current" – it allows one to include a link twice (home->deals and products->deals etc), and the logic of deciding "current" is easy enough to explain to the customer. 
Now that we got that out of the way, let's build the menu data structure: public static List<MvcMenuItemAttribute> ListMenuItems(Assembly assembly) { var result = new List<MvcMenuItemAttribute>(); foreach (var type in assembly.GetTypes()) { if (!type.IsSubclassOf(typeof(Controller))) { continue; } foreach (var method in type.GetMethods()) { var items = method.GetCustomAttributes(typeof(MvcMenuItemAttribute), false) as MvcMenuItemAttribute[]; if (items == null) { continue; } foreach (var item in items) { if (String.IsNullOrEmpty(item.Controller)) { item.Controller = type.Name.Substring(0, type.Name.Length - "Controller".Length); } if (String.IsNullOrEmpty(item.Action)) { item.Action = method.Name; } result.Add(item); } } } return result.OrderBy(i => i.Order).ToList(); } Using reflection, the ListMenuItems method takes an assembly (you will hand it your MVC web assembly) and generates a list of menu items. It digs up all the types, and for each one that is an MVC Controller, digs up the methods. Methods decorated with the MvcMenuItemAttribute get plucked and added to the output list. Again, pretty simple. To make the structure hierarchical, a LINQ expression matches up all the items to their parent: public static void RegisterMenuItems(List<MvcMenuItemAttribute> items) { _MenuItems = items; _MenuItems.ForEach(i => i.ParentItem = items.FirstOrDefault(p => String.Equals(p.Link, i.ParentLink, StringComparison.InvariantCultureIgnoreCase))); } The _MenuItems is simply an internal list to keep things around for later rendering. Finally, to package the menu building for easy consumption: public static void RegisterMenuItems(Type mvcApplicationType) { RegisterMenuItems(ListMenuItems(Assembly.GetAssembly(mvcApplicationType))); } To bring this puppy home, a call in Global.asax.cs Application_Start() registers the menu. Notice the ugliness of reflection is tucked away from the innocent developer. All they have to do is call the RegisterMenuItems() and pass in the type of the application. When you use the new project template, global.asax declares a class public class MvcApplication : HttpApplication and that is why the Register call passes in that type. protected void Application_Start() { AreaRegistration.RegisterAllAreas(); RegisterRoutes(RouteTable.Routes);   MvcMenu.RegisterMenuItems(typeof(MvcApplication)); }   What else is left to do? Oh, right, render! 
public static void ShowMenu(this TextWriter output) { var writer = new HtmlTextWriter(output);   renderHierarchy(writer, _MenuItems, null); }   public static void ShowBreadCrumb(this TextWriter output, Uri currentUri) { var writer = new HtmlTextWriter(output); string currentLink = "/" + currentUri.GetComponents(UriComponents.Path, UriFormat.Unescaped);   var menuItem = _MenuItems.FirstOrDefault(m => m.Link.Equals(currentLink, StringComparison.CurrentCultureIgnoreCase)); if (menuItem != null) { renderBreadCrumb(writer, _MenuItems, menuItem); } }   private static void renderBreadCrumb(HtmlTextWriter writer, List<MvcMenuItemAttribute> menuItems, MvcMenuItemAttribute current) { if (current == null) { return; } var parent = current.ParentItem; renderBreadCrumb(writer, menuItems, parent); writer.Write(current.MenuText); writer.Write(" / ");   }     static void renderHierarchy(HtmlTextWriter writer, List<MvcMenuItemAttribute> hierarchy, MvcMenuItemAttribute root) { if (!hierarchy.Any(i => i.ParentItem == root)) return;   writer.RenderBeginTag(HtmlTextWriterTag.Ul); foreach (var current in hierarchy.Where(element => element.ParentItem == root).OrderBy(i => i.Order)) { if (ItemFilter == null || ItemFilter(current)) {   writer.RenderBeginTag(HtmlTextWriterTag.Li); writer.AddAttribute(HtmlTextWriterAttribute.Href, current.Link); writer.AddAttribute(HtmlTextWriterAttribute.Alt, current.MenuText); writer.RenderBeginTag(HtmlTextWriterTag.A); writer.WriteEncodedText(current.MenuText); writer.RenderEndTag(); // link renderHierarchy(writer, hierarchy, current); writer.RenderEndTag(); // li } } writer.RenderEndTag(); // ul } The ShowMenu method renders the menu out to the provided TextWriter. In previous posts I've discussed my partiality to using the well-debugged, time-tested HtmlTextWriter to render HTML rather than writing out angle brackets by hand. In addition, I prefer writing out using the actual writer on the actual stream rather than generating string and byte intermediaries (yes, StringBuilder being no exception). To carry out the rendering of a hierarchical menu, the recursive renderHierarchy() is used. You may notice that an ItemFilter is called before rendering each item. I figured that at some point one might want to exclude certain items from the menu based on security role or context or something. That delegate is the hook for such a future feature. To render a breadcrumb, recursion is used again, this time simply to unwind the parent hierarchy from the leaf node, rendering on the return from the recursion rather than on the way down. I guess I was stuck in LISP that day... recursion is fun though.   Now all that is left is some usage! Open your Site.Master or wherever you'd like to place a menu or breadcrumb, and plant one of these calls: <% MvcMenu.ShowBreadCrumb(this.Writer, Request.Url); %> to show a breadcrumb trail (notice lack of "=" after <% and the semicolon). <% MvcMenu.ShowMenu(Writer); %> to show the menu.   As mentioned before, the HTML output is nested <UL> <LI> tags, which should make it easy to style using abundant CSS to produce anything from static horizontal or vertical to dynamic drop-downs.   This has been quite a fun little implementation and I was pleased that the code size remained low. The main crux was figuring out how to pass parent information from the attribute to the hierarchy builder because attributes have restricted parameter types. Once I settled on that implementation, the rest falls into place quite easily.


  • Setting up Apache to serve HTTPS pages

    - by zac
    I am trying to set up a site using VMware Workstation, Ubuntu 11.10, and Apache 2. The site works fine, but now the HTTPS pages are not showing up. For example, if I try to go to https://www.mysite.com/checkout I just see the message "Not Found: The requested URL /checkout/ was not found on this server." I don't really know what I am doing and have tried a lot of things to get the SSL certificates set up right. A few things I have in there: in my httpd.conf I just have: ServerName localhost In my ports.conf I have: NameVirtualHost *:80 Listen 80 <IfModule mod_ssl.c> # If you add NameVirtualHost *:443 here, you will also have to change # the VirtualHost statement in /etc/apache2/sites-available/default-ssl # to <VirtualHost *:443> # Server Name Indication for SSL named virtual hosts is currently not # supported by MSIE on Windows XP. Listen 443 http </IfModule> <IfModule mod_gnutls.c> Listen 443 http </IfModule> In /etc/apache2/sites-available/default-ssl: <IfModule mod_ssl.c> <VirtualHost _default_:443> ServerAdmin webmaster@localhost DocumentRoot /var/www <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/> Options Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> .... truncated in sites-available/default I have: <VirtualHost *:80> ServerAdmin webmaster@localhost DocumentRoot /var/www #DocumentRoot /home/magento/site/ <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/> #<Directory /home/magento/site/> Options Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> </VirtualHost> <virtualhost *:443> SSLEngine on SSLCertificateFile /etc/apache2/ssl/server.crt SSLCertificateKeyFile /etc/apache2/ssl/server.key ServerAdmin webmaster@localhost <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/> #<Directory /home/magento/site/> Options Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> </virtualhost> I also have in sites-available a file set up for my site URL, www.mysite.com, so in /etc/apache2/sites-available/mysite.com: <VirtualHost *:80> ServerName mysite.com DocumentRoot /home/magento/mysite.com <Directory /> Options FollowSymLinks AllowOverride All </Directory> <Directory /home/magento/mysite.com/ > Options Indexes FollowSymLinks MultiViews AllowOverride All Order allow,deny allow from all </Directory> ErrorLog /home/magento/logs/apache.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. 
LogLevel warn </VirtualHost> <VirtualHost *:443> ServerName mysite.com DocumentRoot /home/magento/mysite.com <Directory /> Options FollowSymLinks AllowOverride All </Directory> <Directory /home/magento/mysite.com/ > Options Indexes FollowSymLinks MultiViews AllowOverride All Order allow,deny allow from all </Directory> ErrorLog /home/magento/logs/apache.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn </VirtualHost> Thanks for any help getting this setup! As is probably obvious from this post I am pretty lost at this point.


  • mod_status: 403 Forbidden

    - by i.amniels
I have configured mod_status like this in my Apache 2 config file:

<Location /server-status>
    SetHandler server-status
    Order Deny,Allow
</Location>

I also tried:

Order Deny,Allow
Allow from all

And:

Order Allow,Deny
Allow from all

And:

Order Deny,Allow
Deny from all
Allow from 145.xxx.xx.xx

And of course I restarted Apache after each configuration change. I tried the configurations above in my main config file and under a VirtualHost directive. When I place the configuration under VirtualHost or in a NameVirtualHost block I get a 404 instead of a 403; otherwise I only get 403 Forbidden when I visit example.com/server-status/. I don't use an .htaccess file. It works now! I placed the server-status block right above the definitions of the virtual hosts; earlier I hadn't been testing the right configuration on the right virtual host. On the virtual host without .htaccess files it works; on virtual hosts with an app installed I get a 404 because of URL rewrites in the .htaccess file.
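For reference, a minimal sketch of the arrangement the poster describes: the Location block placed in the main server config, above (and therefore outside) the virtual host definitions. The 145.xxx.xx.xx placeholder is kept from the question, and the Order/Allow syntax is Apache 2.2, matching the rest of this page:

<IfModule mod_status.c>
    ExtendedStatus On
    <Location /server-status>
        SetHandler server-status
        Order Deny,Allow
        Deny from all
        Allow from 127.0.0.1 145.xxx.xx.xx
    </Location>
</IfModule>

Note that a RewriteRule in an application's .htaccess can still swallow /server-status on a virtual host whose rules don't exclude it, which is the 404 seen above; adding a RewriteCond %{REQUEST_URI} !=/server-status ahead of the catch-all rule is one common workaround.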


  • July, the 31 Days of SQL Server DMO’s – Day 23 (sys.dm_db_index_usage_stats)

    - by Tamarick Hill
The sys.dm_db_index_usage_stats Dynamic Management View is used to return usage information about the various indexes on your SQL Server instance. Let's have a look at this DMV against our AdventureWorks2012 database so we can examine the information returned.

SELECT * FROM sys.dm_db_index_usage_stats
WHERE database_id = db_id('AdventureWorks2012')

The first three columns in the result set represent the database_id, object_id, and index_id of a given row. You can join these columns back to other system tables to extract the actual database, object, and index names. The next four columns are probably the most beneficial columns within this DMV. First, the user_seeks column represents the number of times that a user query caused a seek operation against a particular index. The user_scans column represents how many times a user query caused a scan operation on a particular index. The user_lookups column represents how many times an index was used to perform a lookup operation. The user_updates column refers to how many times an index had to be updated due to a write operation that affected a particular index. The last_user_seek, last_user_scan, last_user_lookup, and last_user_update columns provide you with DATETIME information about when the last user seek, scan, lookup, or update operation was performed. The remaining columns in the result set are the same as the ones we previously discussed, except that instead of the various operations being generated from user requests, they are generated from system background requests. This is an extremely useful DMV and one of my favorites when it comes to index maintenance. As we all know, indexes are extremely beneficial for improving the performance of your read operations. But indexes have a downside as well: they slow down the performance of your write operations, and they also require additional resources for storage. For this reason, in my opinion, it is important to regularly analyze the indexes on your system to make sure the indexes you have are being used efficiently. My AdventureWorks2012 database is only used for demonstrating or testing things, so I don't have a lot of meaningful information here, but for a production system, if you see an index that is never getting any seeks, scans, or lookups, but is constantly getting a ton of updates, it more than likely would be a good candidate for you to consider removing. You would not be getting much benefit from the index, yet it is incurring a cost on your system due to constantly having to be updated for your write operations, not to mention the additional storage it is consuming. You should regularly analyze your indexes to ensure you keep your database systems as efficient and lean as possible. One thing to note is that these DMV statistics are reset every time SQL Server is restarted. Therefore it would not be a wise idea to make decisions about removing indexes right after a server reboot or a cluster roll. If you restart your SQL Server instances frequently, for example if you schedule weekly/monthly cluster rolls, then you may not capture indexes that are being used for weekly/monthly reports that run for business users. And if you remove them, you may have some upset people at your desk on Monday morning.
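As a quick aside, here is a sketch of the join-back to system tables mentioned at the top of this post (assuming it runs inside AdventureWorks2012; the aliases are illustrative):

-- Resolve database/object/index IDs to names (run inside AdventureWorks2012)
SELECT DB_NAME(us.database_id) AS database_name,
       OBJECT_NAME(us.[object_id], us.database_id) AS object_name,
       i.name AS index_name,
       us.user_seeks, us.user_scans, us.user_lookups, us.user_updates
FROM sys.dm_db_index_usage_stats AS us
JOIN sys.indexes AS i
  ON i.[object_id] = us.[object_id]
 AND i.index_id = us.index_id
WHERE us.database_id = DB_ID('AdventureWorks2012');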
If you would like to begin analyzing your indexes to possibly remove the ones that your system is not using, I would recommend building a process to load this DMV information into a table on a scheduled basis (depending on how frequently you perform an operation that would reset these statistics). You can then analyze the data over a period of time to get a more accurate view of which indexes are really being used and which ones are not. For more information about this DMV, please see the Books Online link below: http://msdn.microsoft.com/en-us/library/ms188755.aspx Follow me on Twitter @PrimeTimeDBA
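A minimal sketch of such a collection process (the table and column choices are illustrative, not from the original post); a SQL Agent job could run the INSERT on whatever schedule fits:

-- Illustrative snapshot table for periodic DMV captures
CREATE TABLE dbo.IndexUsageHistory
(
    capture_time DATETIME NOT NULL DEFAULT (GETDATE()),
    database_id  SMALLINT NOT NULL,
    [object_id]  INT      NOT NULL,
    index_id     INT      NOT NULL,
    user_seeks   BIGINT   NOT NULL,
    user_scans   BIGINT   NOT NULL,
    user_lookups BIGINT   NOT NULL,
    user_updates BIGINT   NOT NULL
);

-- Scheduled capture: append the current counters
INSERT INTO dbo.IndexUsageHistory
       (database_id, [object_id], index_id, user_seeks, user_scans, user_lookups, user_updates)
SELECT database_id, [object_id], index_id, user_seeks, user_scans, user_lookups, user_updates
FROM sys.dm_db_index_usage_stats;

Comparing snapshots taken before and after a restart lets you stitch the counters back together across the resets described above.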


  • You Need BRM When You have EBS – and Even When You Don’t!

    - by bwalstra
Here is a list of criteria for testing your business systems (Oracle E-Business Suite, EBS, or otherwise) in how well they support your lines of digital business. If you score low, you need Oracle Billing and Revenue Management (BRM).

Functions: Scalability; High Availability (99.999%); Performance; Extensibility (e.g. APIs, Tools); Upgradability; Maintenance; Security; Standards Compliance; Regulatory Compliance (e.g. SOX); User Experience; Implementation Complexity.

Features: Customer Management; Real-Time Service Authorization; Pricing/Promotions Flexibility; Subscriptions; Usage Rating and Pricing; Real-Time Balance Mgmt.; Non-Currency Resources; Billing & Invoicing; A/R & G/L; Payments & Collections; Revenue Assurance.

Integration with Key Enterprise Applications: Reporting; Business Intelligence; Order & Service Mgmt (OSM); Siebel CRM; E-Business Suite; On-/Off-line Mediation; Payment Processing; Taxation; Royalties & Settlements; Operations Management; Disaster Recovery.

Overall Evaluation: Implementation; Configuration; Extensibility; Maintenance; Upgradability; Functional Richness; Feature Richness; Usability; OOB Integrations; Operations Management; Leveraging Oracle Technology; Overall Fit for Purpose.

You need Oracle BRM:
- Built for high-volume transaction processing
- Monetizes any service or event based on any metric
- Supports high-volume usage rating, pricing and promotions
- Provides real-time charging, service authorization and balance management
- Supports any account structure (e.g. corporate hierarchies etc.)
- Scales from low volumes to extremely high volumes of transactions (e.g. billions of trxn per hour)
- Exposes every single function via APIs (e.g. Java, C/C++, PERL, COM, Web Services, JCA)

Immediate Business Benefits of BRM:
- Improved business agility and performance: supports the flexibility, innovation, and customer-centricity required for current and future business models
- Faster time to market for new products and services: supports a 360 view of the customer in real time, so products can be launched to targeted customers at a record-breaking pace
- Streamlined deployment and operation: productized integrations, standards-based APIs, and OOB enablement lower deployment and maintenance costs
- Extensible and scalable solution: minimizes risk; the initial phase is deployed rapidly, and the solution is extended and scaled seamlessly per business requirements

Key Considerations:
- Productized integration with key Oracle applications: lower integration risks and cost; efficient order-to-cash process
- Engineered solution, with certification on the Exa platform: Exadata was tested at PayPal in the re-platforming project; optimal performance of Oracle assets on Oracle hardware
- Productized solution in Rapid Offer Design and Order Delivery: fast offer design and implementation; significantly shorter order cycle time
- Productized integration with Oracle Enterprise Manager: visibility into system operability for optimal uptime


  • Disable .htaccess from apache allowoverride none, still reads .htaccess files

    - by John Magnolia
    I have moved all of our .htaccess config into <Directory> blocks and set AllowOverride None in the default and default-ssl. Although after restarting apache it is still reading the .htaccess files. How can I completely turn off reading these files? Update of all files with "AllowOverride" /etc/apache2/mods-available/userdir.conf <IfModule mod_userdir.c> UserDir public_html UserDir disabled root <Directory /home/*/public_html> AllowOverride FileInfo AuthConfig Limit Indexes Options MultiViews Indexes SymLinksIfOwnerMatch IncludesNoExec <Limit GET POST OPTIONS> Order allow,deny Allow from all </Limit> <LimitExcept GET POST OPTIONS> Order deny,allow Deny from all </LimitExcept> </Directory> </IfModule> /etc/apache2/mods-available/alias.conf <IfModule alias_module> # # Aliases: Add here as many aliases as you need (with no limit). The format is # Alias fakename realname # # Note that if you include a trailing / on fakename then the server will # require it to be present in the URL. So "/icons" isn't aliased in this # example, only "/icons/". If the fakename is slash-terminated, then the # realname must also be slash terminated, and if the fakename omits the # trailing slash, the realname must also omit it. # # We include the /icons/ alias for FancyIndexed directory listings. If # you do not use FancyIndexing, you may comment this out. # Alias /icons/ "/usr/share/apache2/icons/" <Directory "/usr/share/apache2/icons"> Options Indexes MultiViews AllowOverride None Order allow,deny Allow from all </Directory> </IfModule> /etc/apache2/httpd.conf # # Directives to allow use of AWStats as a CGI # Alias /awstatsclasses "/usr/share/doc/awstats/examples/wwwroot/classes/" Alias /awstatscss "/usr/share/doc/awstats/examples/wwwroot/css/" Alias /awstatsicons "/usr/share/doc/awstats/examples/wwwroot/icon/" ScriptAlias /awstats/ "/usr/share/doc/awstats/examples/wwwroot/cgi-bin/" # # This is to permit URL access to scripts/files in AWStats directory. # <Directory "/usr/share/doc/awstats/examples/wwwroot"> Options None AllowOverride None Order allow,deny Allow from all </Directory> Alias /awstats-icon/ /usr/share/awstats/icon/ <Directory /usr/share/awstats/icon> Options None AllowOverride None Order allow,deny Allow from all </Directory> /etc/apache2/sites-available/default-ssl <IfModule mod_ssl.c> <VirtualHost _default_:443> ServerAdmin webmaster@localhost DocumentRoot /var/www <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/> Options Indexes FollowSymLinks MultiViews AllowOverride None </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/ssl_access.log combined # SSL Engine Switch: # Enable/Disable SSL for this virtual host. SSLEngine on # A self-signed (snakeoil) certificate can be created by installing # the ssl-cert package. See # /usr/share/doc/apache2.2-common/README.Debian.gz for more info. # If both key and certificate are stored in the same file, only the # SSLCertificateFile directive is needed. 
SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key # Server Certificate Chain: # Point SSLCertificateChainFile at a file containing the # concatenation of PEM encoded CA certificates which form the # certificate chain for the server certificate. Alternatively # the referenced file can be the same as SSLCertificateFile # when the CA certificates are directly appended to the server # certificate for convinience. #SSLCertificateChainFile /etc/apache2/ssl.crt/server-ca.crt # Certificate Authority (CA): # Set the CA certificate verification path where to find CA # certificates for client authentication or alternatively one # huge file containing all of them (file must be PEM encoded) # Note: Inside SSLCACertificatePath you need hash symlinks # to point to the certificate files. Use the provided # Makefile to update the hash symlinks after changes. #SSLCACertificatePath /etc/ssl/certs/ #SSLCACertificateFile /etc/apache2/ssl.crt/ca-bundle.crt # Certificate Revocation Lists (CRL): # Set the CA revocation path where to find CA CRLs for client # authentication or alternatively one huge file containing all # of them (file must be PEM encoded) # Note: Inside SSLCARevocationPath you need hash symlinks # to point to the certificate files. Use the provided # Makefile to update the hash symlinks after changes. #SSLCARevocationPath /etc/apache2/ssl.crl/ #SSLCARevocationFile /etc/apache2/ssl.crl/ca-bundle.crl # Client Authentication (Type): # Client certificate verification type and depth. Types are # none, optional, require and optional_no_ca. Depth is a # number which specifies how deeply to verify the certificate # issuer chain before deciding the certificate is not valid. #SSLVerifyClient require #SSLVerifyDepth 10 # Access Control: # With SSLRequire you can do per-directory access control based # on arbitrary complex boolean expressions containing server # variable checks and other lookup directives. The syntax is a # mixture between C and Perl. See the mod_ssl documentation # for more details. #<Location /> #SSLRequire ( %{SSL_CIPHER} !~ m/^(EXP|NULL)/ \ # and %{SSL_CLIENT_S_DN_O} eq "Snake Oil, Ltd." \ # and %{SSL_CLIENT_S_DN_OU} in {"Staff", "CA", "Dev"} \ # and %{TIME_WDAY} >= 1 and %{TIME_WDAY} <= 5 \ # and %{TIME_HOUR} >= 8 and %{TIME_HOUR} <= 20 ) \ # or %{REMOTE_ADDR} =~ m/^192\.76\.162\.[0-9]+$/ #</Location> # SSL Engine Options: # Set various options for the SSL engine. # o FakeBasicAuth: # Translate the client X.509 into a Basic Authorisation. This means that # the standard Auth/DBMAuth methods can be used for access control. The # user name is the `one line' version of the client's X.509 certificate. # Note that no password is obtained from the user. Every entry in the user # file needs this password: `xxj31ZMTZzkVA'. # o ExportCertData: # This exports two additional environment variables: SSL_CLIENT_CERT and # SSL_SERVER_CERT. These contain the PEM-encoded certificates of the # server (always existing) and the client (only existing when client # authentication is used). This can be used to import the certificates # into CGI scripts. # o StdEnvVars: # This exports the standard SSL/TLS related `SSL_*' environment variables. # Per default this exportation is switched off for performance reasons, # because the extraction step is an expensive operation and is usually # useless for serving static content. So one usually enables the # exportation for CGI and SSI requests only. 
# o StrictRequire: # This denies access when "SSLRequireSSL" or "SSLRequire" applied even # under a "Satisfy any" situation, i.e. when it applies access is denied # and no other module can change it. # o OptRenegotiate: # This enables optimized SSL connection renegotiation handling when SSL # directives are used in per-directory context. #SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire <FilesMatch "\.(cgi|shtml|phtml|php)$"> SSLOptions +StdEnvVars </FilesMatch> <Directory /usr/lib/cgi-bin> SSLOptions +StdEnvVars </Directory> # SSL Protocol Adjustments: # The safe and default but still SSL/TLS standard compliant shutdown # approach is that mod_ssl sends the close notify alert but doesn't wait for # the close notify alert from client. When you need a different shutdown # approach you can use one of the following variables: # o ssl-unclean-shutdown: # This forces an unclean shutdown when the connection is closed, i.e. no # SSL close notify alert is send or allowed to received. This violates # the SSL/TLS standard but is needed for some brain-dead browsers. Use # this when you receive I/O errors because of the standard approach where # mod_ssl sends the close notify alert. # o ssl-accurate-shutdown: # This forces an accurate shutdown when the connection is closed, i.e. a # SSL close notify alert is send and mod_ssl waits for the close notify # alert of the client. This is 100% SSL/TLS standard compliant, but in # practice often causes hanging connections with brain-dead browsers. Use # this only for browsers where you know that their SSL implementation # works correctly. # Notice: Most problems of broken clients are also related to the HTTP # keep-alive facility, so you usually additionally want to disable # keep-alive for those clients, too. Use variable "nokeepalive" for this. # Similarly, one has to force some clients to use HTTP/1.0 to workaround # their broken HTTP/1.1 implementation. Use variables "downgrade-1.0" and # "force-response-1.0" for this. BrowserMatch "MSIE [2-6]" \ nokeepalive ssl-unclean-shutdown \ downgrade-1.0 force-response-1.0 # MSIE 7 and newer should be able to use keepalive BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown </VirtualHost> </IfModule> /etc/apache2/sites-available/default <VirtualHost *:80> ServerAdmin webmaster@localhost DocumentRoot /var/www <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/> Options -Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> Alias /delboy /usr/share/phpmyadmin <Directory /usr/share/phpmyadmin> # Restrict phpmyadmin access Order Deny,Allow Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> </VirtualHost> /etc/apache2/conf.d/security # # Disable access to the entire file system except for the directories that # are explicitly allowed later. # # This currently breaks the configurations that come with some web application # Debian packages. 
# #<Directory /> # AllowOverride None # Order Deny,Allow # Deny from all #</Directory> # Changing the following options will not really affect the security of the # server, but might make attacks slightly more difficult in some cases. # # ServerTokens # This directive configures what you return as the Server HTTP response # Header. The default is 'Full' which sends information about the OS-Type # and compiled in modules. # Set to one of: Full | OS | Minimal | Minor | Major | Prod # where Full conveys the most information, and Prod the least. # #ServerTokens Minimal ServerTokens OS #ServerTokens Full # # Optionally add a line containing the server version and virtual host # name to server-generated pages (internal error documents, FTP directory # listings, mod_status and mod_info output etc., but not CGI generated # documents or custom error documents). # Set to "EMail" to also include a mailto: link to the ServerAdmin. # Set to one of: On | Off | EMail # #ServerSignature Off ServerSignature On # # Allow TRACE method # # Set to "extended" to also reflect the request body (only for testing and # diagnostic purposes). # # Set to one of: On | Off | extended # TraceEnable Off #TraceEnable On /etc/apache2/apache2.conf # # Based upon the NCSA server configuration files originally by Rob McCool. # # This is the main Apache server configuration file. It contains the # configuration directives that give the server its instructions. # See http://httpd.apache.org/docs/2.2/ for detailed information about # the directives. # # Do NOT simply read the instructions in here without understanding # what they do. They're here only as hints or reminders. If you are unsure # consult the online docs. You have been warned. # # The configuration directives are grouped into three basic sections: # 1. Directives that control the operation of the Apache server process as a # whole (the 'global environment'). # 2. Directives that define the parameters of the 'main' or 'default' server, # which responds to requests that aren't handled by a virtual host. # These directives also provide default values for the settings # of all virtual hosts. # 3. Settings for virtual hosts, which allow Web requests to be sent to # different IP addresses or hostnames and have them handled by the # same Apache server process. # # Configuration and logfile names: If the filenames you specify for many # of the server's control files begin with "/" (or "drive:/" for Win32), the # server will use that explicit path. If the filenames do *not* begin # with "/", the value of ServerRoot is prepended -- so "foo.log" # with ServerRoot set to "/etc/apache2" will be interpreted by the # server as "/etc/apache2/foo.log". # ### Section 1: Global Environment # # The directives in this section affect the overall operation of Apache, # such as the number of concurrent requests it can handle or where it # can find its configuration files. # # # ServerRoot: The top of the directory tree under which the server's # configuration, error, and log files are kept. # # NOTE! If you intend to place this on an NFS (or otherwise network) # mounted filesystem then please read the LockFile documentation (available # at <URL:http://httpd.apache.org/docs/2.2/mod/mpm_common.html#lockfile>); # you will save yourself a lot of trouble. # # Do NOT add a slash at the end of the directory path. # #ServerRoot "/etc/apache2" # # The accept serialization lock file MUST BE STORED ON A LOCAL DISK. 
# LockFile ${APACHE_LOCK_DIR}/accept.lock # # PidFile: The file in which the server should record its process # identification number when it starts. # This needs to be set in /etc/apache2/envvars # PidFile ${APACHE_PID_FILE} # # Timeout: The number of seconds before receives and sends time out. # Timeout 300 # # KeepAlive: Whether or not to allow persistent connections (more than # one request per connection). Set to "Off" to deactivate. # KeepAlive On # # MaxKeepAliveRequests: The maximum number of requests to allow # during a persistent connection. Set to 0 to allow an unlimited amount. # We recommend you leave this number high, for maximum performance. # MaxKeepAliveRequests 100 # # KeepAliveTimeout: Number of seconds to wait for the next request from the # same client on the same connection. # KeepAliveTimeout 4 ## ## Server-Pool Size Regulation (MPM specific) ## # prefork MPM # StartServers: number of server processes to start # MinSpareServers: minimum number of server processes which are kept spare # MaxSpareServers: maximum number of server processes which are kept spare # MaxClients: maximum number of server processes allowed to start # MaxRequestsPerChild: maximum number of requests a server process serves <IfModule mpm_prefork_module> StartServers 5 MinSpareServers 5 MaxSpareServers 10 MaxClients 150 MaxRequestsPerChild 500 </IfModule> # worker MPM # StartServers: initial number of server processes to start # MaxClients: maximum number of simultaneous client connections # MinSpareThreads: minimum number of worker threads which are kept spare # MaxSpareThreads: maximum number of worker threads which are kept spare # ThreadLimit: ThreadsPerChild can be changed to this maximum value during a # graceful restart. ThreadLimit can only be changed by stopping # and starting Apache. # ThreadsPerChild: constant number of worker threads in each server process # MaxRequestsPerChild: maximum number of requests a server process serves <IfModule mpm_worker_module> StartServers 2 MinSpareThreads 25 MaxSpareThreads 75 ThreadLimit 64 ThreadsPerChild 25 MaxClients 150 MaxRequestsPerChild 0 </IfModule> # event MPM # StartServers: initial number of server processes to start # MaxClients: maximum number of simultaneous client connections # MinSpareThreads: minimum number of worker threads which are kept spare # MaxSpareThreads: maximum number of worker threads which are kept spare # ThreadsPerChild: constant number of worker threads in each server process # MaxRequestsPerChild: maximum number of requests a server process serves <IfModule mpm_event_module> StartServers 2 MaxClients 150 MinSpareThreads 25 MaxSpareThreads 75 ThreadLimit 64 ThreadsPerChild 25 MaxRequestsPerChild 0 </IfModule> # These need to be set in /etc/apache2/envvars User ${APACHE_RUN_USER} Group ${APACHE_RUN_GROUP} # # AccessFileName: The name of the file to look for in each directory # for additional configuration directives. See also the AllowOverride # directive. # AccessFileName .htaccess # # The following lines prevent .htaccess and .htpasswd files from being # viewed by Web clients. # <Files ~ "^\.ht"> Order allow,deny Deny from all Satisfy all </Files> # # DefaultType is the default MIME type the server will use for a document # if it cannot otherwise determine one, such as from filename extensions. # If your server contains mostly text or HTML documents, "text/plain" is # a good value. 
If most of your content is binary, such as applications # or images, you may want to use "application/octet-stream" instead to # keep browsers from trying to display binary files as though they are # text. # DefaultType text/plain # # HostnameLookups: Log the names of clients or just their IP addresses # e.g., www.apache.org (on) or 204.62.129.132 (off). # The default is off because it'd be overall better for the net if people # had to knowingly turn this feature on, since enabling it means that # each client request will result in AT LEAST one lookup request to the # nameserver. # HostnameLookups Off # ErrorLog: The location of the error log file. # If you do not specify an ErrorLog directive within a <VirtualHost> # container, error messages relating to that virtual host will be # logged here. If you *do* define an error logfile for a <VirtualHost> # container, that host's errors will be logged there and not here. # ErrorLog ${APACHE_LOG_DIR}/error.log # # LogLevel: Control the number of messages logged to the error_log. # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. # LogLevel warn # Include module configuration: Include mods-enabled/*.load Include mods-enabled/*.conf # Include all the user configurations: Include httpd.conf # Include ports listing Include ports.conf # # The following directives define some format nicknames for use with # a CustomLog directive (see below). # If you are behind a reverse proxy, you might want to change %h into %{X-Forwarded-For}i # LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %O" common LogFormat "%{Referer}i -> %U" referer LogFormat "%{User-agent}i" agent # Include of directories ignores editors' and dpkg's backup files, # see README.Debian for details. # Include generic snippets of statements Include conf.d/ # Include the virtual host configurations: Include sites-enabled/
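For what it's worth, the configuration that usually ends the .htaccess lookups entirely is a server-wide <Directory /> block, since AllowOverride is only consulted per matched <Directory> section and any more specific block (including ones pulled in via Include, conf.d/, or an enabled site) can silently re-enable overrides. A minimal sketch under those assumptions:

# In apache2.conf (or a file pulled in via conf.d/), outside any vhost:
<Directory />
    Options FollowSymLinks
    AllowOverride None
</Directory>

# Then audit everything Apache actually loads for stray AllowOverride grants:
#   grep -Ri "AllowOverride" /etc/apache2/ | grep -v "AllowOverride None"

With AllowOverride None in effect for a directory, Apache 2.2 does not even attempt to read .htaccess there, so if the files are still being honored, some loaded section must be overriding it. In the dump above, mods-available/userdir.conf still grants AllowOverride FileInfo AuthConfig Limit Indexes for /home/*/public_html whenever mod_userdir is enabled, which is one candidate.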


• .htaccess error "not allowed here" for all instructions

    - by andres descalzo
I am using Debian Lenny and Apache 2. I changed the default site file to allow .htaccess with: AllowOverride AuthConfig. But I always get the error message "not allowed here" when putting any instructions in the .htaccess file. EDIT: file default:

<VirtualHost *:80>
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/
    <Directory />
        Options FollowSymLinks
        Order allow,deny
        Allow from all
        AllowOverride All
    </Directory>
    <Directory /var/www/>
        Options Indexes FollowSymLinks Includes
        #AllowOverride All
        #AllowOverride Indexes AuthConfig Limit FileInfo
        AllowOverride AuthConfig
        Order allow,deny
        Allow from all
    </Directory>
    ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
    <Directory "/usr/lib/cgi-bin">
        AllowOverride None
        Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
        Order allow,deny
        Allow from all
    </Directory>
    ErrorLog /var/log/apache2/error.log
    # Possible values include: debug, info, notice, warn, error, crit,
    # alert, emerg.
    LogLevel warn
    CustomLog /var/log/apache2/access.log combined
    Alias /doc/ "/usr/share/doc/"
    <Directory "/usr/share/doc/">
        Options Indexes MultiViews FollowSymLinks
        AllowOverride None
        Order deny,allow
        Deny from all
        Allow from 127.0.0.0/255.0.0.0 ::1/128
    </Directory>
</VirtualHost>

.htaccess:

#Options +FollowSymlinks
# Prevent Directoy listing
Options -Indexes
# Prevent Direct Access to files
<FilesMatch "\.(tpl|ini)">
    Order deny,allow
    Deny from all
</FilesMatch>
# SEO URL Settings
RewriteEngine On
RewriteBase /
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)\?*$ index.php?_route_=$1 [L,QSA]

PHP info:

apache2handler
Apache Version = Apache/2.2.9 (Debian) PHP/5.2.6-1+lenny10 with Suhosin-Patch
Apache API Version = 20051115
Server Administrator = webmaster@localhost
Hostname:Port = hw-linux.homework:80
User/Group = www-data(33)/33
Max Requests = Per Child: 0 - Keep Alive: on - Max Per Connection: 100
Timeouts = Connection: 300 - Keep-Alive: 15
Virtual Server = Yes
Server Root = /etc/apache2
Loaded Modules = core mod_log_config mod_logio prefork http_core mod_so mod_alias mod_auth_basic mod_authn_file mod_authz_default mod_authz_groupfile mod_authz_host mod_authz_user mod_autoindex mod_cgi mod_deflate mod_dir mod_env mod_mime mod_negotiation mod_php5 mod_rewrite mod_setenvif mod_status
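The likely answer here, for what it's worth: each group of .htaccess directives needs a matching AllowOverride grant, and AllowOverride AuthConfig covers none of the directives in this particular file. Options -Indexes needs the Options override, the Order/Deny lines inside FilesMatch need Limit, and the mod_rewrite directives need FileInfo. A sketch of a <Directory> block that would permit this .htaccess:

<Directory /var/www/>
    Options Indexes FollowSymLinks Includes
    # Grant exactly the override classes the .htaccess above uses
    AllowOverride AuthConfig FileInfo Limit Options
    Order allow,deny
    Allow from all
</Directory>

Also worth noting: AllowOverride is itself only valid in <Directory> sections of the server config, so any attempt to put AllowOverride inside .htaccess produces exactly this "not allowed here" error.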


  • HTML5-MVC application using VS2010 SP1

    - by nmarun
This is my first attempt at creating HTML5 pages. VS 2010 allows working with HTML5 now (you just need to make a small change after installing SP1), so my Razor view is now an HTML5 page. I call this application 5Commerce, an (over-simplified) HTML5 ECommerce site. So here's the flow of the application:

- the home page renders
- the user enters first and last name, chooses a product and the quantity
- the user can enter additional instructions for the order
- the user places the order
- the user is then taken to another page showing the order details

Off to the details. This is what my page looks like in Google Chrome 10 beta (or later) soon after it renders. Here are some of the things to observe on this. Look a little closer and you'll see a border around the first name textbox; this is 'autofocus' in action. I've set the autofocus attribute on this textbox, so as soon as the page loads, this control gets focus.

<input type="text" autofocus id="firstName" class="inputWidth" data_minlength=""
       data_maxlength="" placeholder="first name" />

See the partially grayed out 'last name' text in the second textbox? This is set using the placeholder attribute (see above). It gets wiped out on focus and improves the UI visuals in general. The quantity textbox is actually a numerical-only textbox.

<input type="number" id="quantity" data_mincount="" class="inputWidth" />

The last line is for additional instructions. This looks like a label, but its content is editable. Just adding the 'contenteditable' attribute to the span allows the user to edit the text inside.

<span contenteditable id="additionalInstructions" data_texttype="" class="editableContent">select text and edit </span>

All of the above is just plain HTML (no lurking JavaScript acting in here). Makes it real clean and simple. Going more into the HTML, I see that the _Layout.cshtml is already using some HTML5 content. I created my project before installing SP1, so that was the reason for my surprise.

<!DOCTYPE html>

This is the doctype declaration in HTML5, and it is supported even by IE6 (just take my word on IE6 now, don't go install it to test it, especially when MS is doing an IE6 countdown). That's just amazing: extremely easy to read, remember, and talk about, and a few bytes less on every call! I modified the rest of my _Layout.cshtml to the below:

<!DOCTYPE html>
<html>
<head>
    <title>5Commerce - HTML 5 Ecommerce site</title>
    <link href="@Url.Content("~/Content/Site.css")" rel="stylesheet" type="text/css" />
    <script src="@Url.Content("~/Scripts/jquery-1.4.4.min.js")" type="text/javascript"></script>
    <script src="@Url.Content("~/Scripts/CustomScripts.js")" type="text/javascript"></script>
    <script type="text/javascript">
        $(document).ready(function () {
            WireupEvents();
        });
    </script>
</head>

<body role="document" class="bodybackground">
    <header role="heading">
        <h2>5Commerce - HTML 5 Ecommerce site!</h2>
    </header>
    <section id="mainForm">
        @RenderBody()
    </section>
    <footer id="page_footer" role="siteBaseInfo">
        <p>&copy; 2011 5Commerce Inc!</p>
    </footer>
</body>
</html>

I'm sure you're seeing some of the new tags here. To give a brief intro about them:

- <header>, <footer>: mark the header/footer region of a page or section.
- <section>: a logical grouping of content.
- The role attribute: identifies the responsibility of an element. It can be used by screen readers and can also be filtered through jQuery.

SP1 also allows for some IntelliSense in HTML5.
You can see the other types of input fields (email, date, datetime, month, url), and there are others as well. So once my page loads, i.e., 'on document ready', I'm wiring up the events following the principles of unobtrusive JavaScript. In the snippet below, I'm controlling the behavior of the input controls for specific events.

$("#productList").bind('change blur', function () {
    IsSelectedProductValid();
});

$("#quantity").bind('blur', function () {
    IsQuantityValid();
});

$("#placeOrderButton").click(
    function () {
        if (IsPageValid()) {
            LoadProducts();
        }
    });

This enables some client-side validation to occur before the data is sent to the server. These validation constraints are obtained through a JSON call to the WCF service and are set on the 'data_' attributes of the input controls. Have a look at the GetValidators() function below:

function GetValidators() {
    // the post to your webservice or page
    $.ajax({
        type: "GET", // GET or POST or PUT or DELETE verb
        url: "http://localhost:14805/OrderService.svc/GetValidators", // Location of the service
        data: "{}", // Data sent to server
        contentType: "application/json; charset=utf-8", // content type sent to server
        dataType: "json", // Expected data format from server
        processData: true, // True or False
        success: function (result) { // On successful service call
            if (result.length > 0) {
                for (var i = 0; i < result.length; i++) {
                    if (result[i].PropertyName == "FirstName") {
                        if (result[i].MinLength > 0) {
                            $("#firstName").attr("data_minLength", result[i].MinLength);
                        }
                        if (result[i].MaxLength > 0) {
                            $("#firstName").attr("data_maxLength", result[i].MaxLength);
                        }
                    }
                    else if (result[i].PropertyName == "LastName") {
                        if (result[i].MinLength > 0) {
                            $("#lastName").attr("data_minLength", result[i].MinLength);
                        }
                        if (result[i].MaxLength > 0) {
                            $("#lastName").attr("data_maxLength", result[i].MaxLength);
                        }
                    }
                    else if (result[i].PropertyName == "Quantity") {
                        if (result[i].MinCount > 0) {
                            $("#quantity").attr("data_minCount", result[i].MinCount);
                        }
                    }
                    else if (result[i].PropertyName == "AdditionalInstructions") {
                        if (result[i].TextType.length > 0) {
                            $("#additionalInstructions").attr("data_textType", result[i].TextType);
                        }
                    }
                }
            }
        },
        error: function (result) { // When service call fails
            alert('Service call failed: ' + result.status + ' ' + result.statusText);
        }
    });

    //....
}

Just before the GetValidators() function runs and sets the validation constraints, this is what the HTML looks like (seen through the dev tools of Chrome). After the function executes, you see the values in the 'data_' attributes. As and when we enter valid data into these fields, the error messages disappear, since the validation is bound to the blur event of the control. There you see… no error messages (well, the catch here is that once you enter THAT name, all errors disappear automatically). Clicking on 'Place Order!' runs the SaveOrder function. You can see the JSON for the order object that is getting constructed and passed to the WCF service.
function SaveOrder() {
    var addlInstructionsDefaultText = "select text and edit";
    var addlInstructions = $("span:first").text();
    if (addlInstructions == addlInstructionsDefaultText) {
        addlInstructions = '';
    }
    var orderJson = {
        AdditionalInstructions: addlInstructions,
        Customer: {
            FirstName: $("#firstName").val(),
            LastName: $("#lastName").val()
        },
        OrderedProduct: {
            Id: $("#productList").val(),
            Quantity: $("#quantity").val()
        }
    };

    // the post to your webservice or page
    $.ajax({
        type: "POST", // GET or POST or PUT or DELETE verb
        url: "http://localhost:14805/OrderService.svc/SaveOrder", // Location of the service
        data: JSON.stringify(orderJson), // Data sent to server
        contentType: "application/json; charset=utf-8", // content type sent to server
        dataType: "json", // Expected data format from server
        processData: false, // True or False
        success: function (result) { // On successful service call
            window.location.href = "http://localhost:14805/home/ShowOrderDetail/" + result;
        },
        error: function (request, error) { // When service call fails
            alert('Service call failed: ' + request.status + ' ' + request.statusText);
        }
    });
}

The service saves this order into an XML file and returns the order id (a GUID). On success, I redirect to the ShowOrderDetail action method, passing the GUID. This page will show all the details of the order. Although the back-end weightlifting is done by WCF, I did not show any of that plumbing work, as I wanted to concentrate more on HTML5 and its associates. However, you can see it all in the source here. I do have one issue with HTML5, and this is an existing issue with HTML4 as well. If you see the snippet above where I've declared a textbox for first name, you'll see the autofocus attribute just dangling by itself. It doesn't follow the XML syntax of key="value", allowing users to continue writing badly-formatted HTML even in the new version. You'll see the same issue with the 'contenteditable' attribute as well. The work-around is that you can write autofocus="true" and it'll work fine, plus make it well-formatted. But unless the standards enforce this, there will be people (me included) who'll get by, by just typing the bare minimum! Hoping this will get fixed in the coming version updates. Source code here. Verdict: I think it's time for us to embrace the new HTML5. Thank you HTML4, and welcome HTML5.
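The wire-up above calls IsSelectedProductValid(), IsQuantityValid(), and IsPageValid() without showing their bodies. A minimal sketch of what one of them might look like, reading the data_ attributes that GetValidators() stamps onto the controls; the #quantityError element and the exact rules are assumptions, not from the original post:

// Sketch: validate the quantity field against the data_minCount constraint
// set by GetValidators(). Returns true when the field passes.
function IsQuantityValid() {
    var quantity = parseInt($("#quantity").val(), 10);
    var minCount = parseInt($("#quantity").attr("data_minCount"), 10);
    var isValid = !isNaN(quantity) && (isNaN(minCount) || quantity >= minCount);
    // Show or hide a hypothetical inline error label next to the field.
    $("#quantityError").toggle(!isValid);
    return isValid;
}

IsPageValid() would then just AND the individual validators together before SaveOrder() fires.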


  • Performance Enhancement in Full-Text Search Query

    - by Calvin Sun
Ever since its first release, we have continued consolidating and developing the InnoDB Full-Text Search feature, and there is one recent improvement worth blogging about. It is an effort with the MySQL Optimizer team that simplifies the Query Plans of some common queries and dramatically shortens the query time. I will describe the issue, our solution, and the end result, with some performance numbers to demonstrate our continuing work to enhance the Full-Text Search capability. The Issue: As we discussed in previous blogs, InnoDB implements the Full-Text index as inverted auxiliary index tables. The query, once parsed, is reinterpreted into several queries against the related auxiliary tables, and the results are then merged and consolidated to come up with the final result. So at the end of the query, we'll have all matching records on hand, sorted by their ranking or by their Doc IDs. Unfortunately, MySQL's optimizer and query processing had initially been designed for the MyISAM Full-Text index, and sometimes did not fully utilize the complete result package from InnoDB. Here are a couple of examples:

Case 1: Query result ordered by rank, with only the top N results:

mysql> SELECT FTS_DOC_ID, MATCH (title, body) AGAINST ('database') AS SCORE FROM articles ORDER BY score DESC LIMIT 1;

In this query, the user tries to retrieve the single record with the highest ranking. It should have a quick answer once we have all the matching documents on hand, especially since they are already ranked. However, before this change, MySQL would retrieve the ranking for almost every row in the table, sort them all, and then come up with the top-ranked result. This whole retrieve-and-sort is quite unnecessary, given that InnoDB already has the answer. In a real-life case, the user could have millions of rows, so under the old scheme MySQL would retrieve and sort millions of rows' rankings, even if the Full-Text Search had already found that only 3 rows match. The million-ranking retrieval is done in vain. In the above case, it should just ask for the 3 matched rows' rankings; all other rows' rankings are 0. And if it wants the top ranking, it can simply take the first record from our already-sorted result.

Case 2: SELECT COUNT(*) on matching records:

mysql> SELECT COUNT(*) FROM articles WHERE MATCH (title,body) AGAINST ('database' IN NATURAL LANGUAGE MODE);

In this case, the InnoDB search can find the matching rows quickly and will have all of them on hand. However, before our change, every row in the table was requested by MySQL one by one, just to check whether its ranking is larger than 0, and a count was produced afterwards. In fact, there is no need for MySQL to fetch all rows; InnoDB already has all the matching records, and the only thing needed is an InnoDB API call to retrieve the count. The difference can be huge. The following query output shows how big the difference can be:

mysql> select count(*) from searchindex_inno where match(si_title, si_text) against ('people')
+----------+
| count(*) |
+----------+
|   666877 |
+----------+
1 row in set (16 min 17.37 sec)

So the query took almost 16 minutes. Let's see how quickly InnoDB itself can come up with the result.
In InnoDB, you can obtain extra diagnostic printout by turning on "innodb_ft_enable_diag_print"; this prints extra query info to the error log:

keynr=2, 'people' NL search Total docs: 10954826 Total words: 0
UNION: Searching: 'people'
Processing time: 2 secs: row(s) 666877: error: 10
ft_init()
ft_init_ext()
keynr=2, 'people' NL search Total docs: 10954826 Total words: 0
UNION: Searching: 'people'
Processing time: 3 secs: row(s) 666877: error: 10

The output shows it took InnoDB only 3 seconds to get the result, while the whole query took 16 minutes to finish. A large amount of time was wasted on the unneeded row fetching.

The Solution: The solution is obvious. MySQL can skip some of its steps, optimize its plan, and obtain useful information directly from InnoDB. Some of the savings from doing this include:

1) Avoid redundant sorting. Since InnoDB has already sorted the result according to ranking, the MySQL Query Processing layer does not need to sort again to get the top matching results.

2) Avoid row-by-row fetching to get the matching count. InnoDB provides all the matching records; anything not in the result list has a ranking of 0 and does not need to be retrieved, and InnoDB already has a count of the total matching records on hand. There is no need to recount.

3) Covered index scan. The InnoDB result always contains the matching records' Document IDs and their rankings. So if only the Document ID and ranking are needed, there is no need to go to the user table to fetch the record itself.

4) Narrow the search result early, to reduce user-table access. If the user wants the top N matching records, we do not need to fetch all matching records from the user table. We can first select the top N matching Doc IDs, and then fetch only the corresponding records for those Doc IDs.

Performance Results and comparison with MyISAM: The effect of this change is very obvious. I include six test results performed by Alexander Rubin just to demonstrate how fast an InnoDB query now becomes compared with MyISAM Full-Text Search. These tests are based on the English Wikipedia data of 5.4 million rows, approximately a 16G table. The test was performed on a machine with 1 dual-core CPU, an SSD drive, 8G of RAM, and the InnoDB buffer pool set to 8 GB.

Table 1: SELECT with LIMIT clause

mysql> SELECT si_title, match(si_title, si_text) against('family') as rel FROM si WHERE match(si_title, si_text) against('family') ORDER BY rel desc LIMIT 10;

                     InnoDB     MyISAM            Times Faster
Time for the query   1.63 sec   3 min 26.31 sec   127

You can see that for this particular query (retrieve the top 10 records), InnoDB Full-Text Search is now approximately 127 times faster than MyISAM.

Table 2: SELECT COUNT query

mysql> select count(*) from si where match(si_title, si_text) against('family');
+----------+
| count(*) |
+----------+
|   293955 |
+----------+

                     InnoDB     MyISAM             Times Faster
Time for the query   1.35 sec   28 min 59.59 sec   1289

In this particular case, where there are 293k matching results, InnoDB took only 1.35 seconds to get all of them, while it took MyISAM almost half an hour; that is about 1289 times faster!
Table 3: SELECT ID with ORDER BY and LIMIT clause for selected terms

mysql> SELECT <ID>, match(si_title, si_text) against(<TERM>) as rel FROM si_<TB> WHERE match(si_title, si_text) against (<TERM>) ORDER BY rel desc LIMIT 10;

Term                                        InnoDB     MyISAM      Times Faster
family                                      0.5 sec    5.05 sec    10.1
family film                                 0.95 sec   25.39 sec   26.7
Pizza restaurant orange county California   0.93 sec   32.03 sec   34.4
President united states of America          2.5 sec    36.98 sec   14.8

Table 4: SELECT title and text with ORDER BY and LIMIT clause for selected terms

mysql> SELECT <ID>, si_title, si_text, ... as rel FROM si_<TB> WHERE match(si_title, si_text) against (<TERM>) ORDER BY rel desc LIMIT 10;

Term                                        InnoDB     MyISAM      Times Faster
family                                      0.61 sec   41.65 sec   68.3
family film                                 1.15 sec   47.17 sec   41.0
Pizza restaurant orange county california   1.03 sec   48.2 sec    46.8
President united states of america          2.49 sec   44.61 sec   17.9

Table 5: SELECT ID with ORDER BY and LIMIT clause for selected terms

mysql> SELECT <ID>, match(si_title, si_text) against(<TERM>) as rel FROM si_<TB> WHERE match(si_title, si_text) against (<TERM>) ORDER BY rel desc LIMIT 10;

Term                                        InnoDB     MyISAM      Times Faster
family                                      0.5 sec    5.05 sec    10.1
family film                                 0.95 sec   25.39 sec   26.7
Pizza restaurant orange county california   0.93 sec   32.03 sec   34.4
President united states of america          2.5 sec    36.98 sec   14.8

Table 6: SELECT COUNT(*)

mysql> SELECT count(*) FROM si_<TB> WHERE match(si_title, si_text) against (<TERM>) LIMIT 10;

Term                                        InnoDB     MyISAM    Times Faster
family                                      0.47 sec   82 sec    174.5
family film                                 0.83 sec   131 sec   157.8
Pizza restaurant orange county california   0.74 sec   106 sec   143.2
President united states of america          1.96 sec   220 sec   112.2

Again, tables 3 through 6 all show InnoDB consistently outperforming MyISAM in these queries by a large margin. It becomes obvious that InnoDB has a great advantage over MyISAM in handling large data search. Summary: These results demonstrate the great performance we can achieve by making the MySQL optimizer and InnoDB Full-Text Search more tightly coupled. I think there are still many cases where InnoDB's result information has not been fully taken advantage of, which means we still have great room to improve. We will continue to explore this area and get more dramatic results for InnoDB full-text searches. Jimmy Yang, September 29, 2012
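For readers who want to reproduce the Case 1 pattern, here is a minimal sketch of the setup those queries assume (MySQL 5.6 or later, where InnoDB gained FULLTEXT support; names mirror the examples above):

-- InnoDB table with a full-text index over (title, body)
CREATE TABLE articles (
    id    INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    title VARCHAR(200),
    body  TEXT,
    FULLTEXT KEY ft_title_body (title, body)
) ENGINE=InnoDB;

-- Top-1 by relevance: with the optimizer work described above, InnoDB's
-- already-sorted result feeds the ORDER BY ... LIMIT directly instead of
-- ranking and sorting every row in the table.
SELECT id, MATCH(title, body) AGAINST ('database') AS score
FROM articles
WHERE MATCH(title, body) AGAINST ('database')
ORDER BY score DESC
LIMIT 1;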


  • A tale from a Stalker

    - by Peter Larsson
Today I thought I should write something about a stalker I've got. Don't get me wrong, I have way more fans than stalkers, but this stalker is particularly persistent towards me. It all started when I wrote about Relational Division with Sets late last year (http://weblogs.sqlteam.com/peterl/archive/2010/07/02/Proper-Relational-Division-With-Sets.aspx) and, no matter what he tried, he didn't get a better performing query than me. But this didn't click for me until later in this conversation. He must have saved himself for 9 months before posting to me again. Well... Some days ago I got an email from someone I thought I didn't know. Here is his first email:

"Hi, I want a proper solution for achievement the result. The solution must be standard query, means no using as any native code like TOP clause, also the query should run in SQL Server 2000 (no CTE use). We have a table with consecutive keys (nbr) that is not exact sequence. We need bringing all values related with nearest key in the current key row. See the DDL:

CREATE TABLE Nums(nbr INTEGER NOT NULL PRIMARY KEY, val INTEGER NOT NULL);
INSERT INTO Nums(nbr, val) VALUES (1, 0),(5, 7),(9, 4);

See the Result:

pre_nbr     pre_val     nbr         val         nxt_nbr     nxt_val
----------- ----------- ----------- ----------- ----------- -----------
NULL        NULL        1           0           5           7
1           0           5           7           9           4
5           7           9           4           NULL        NULL

The goal is suggesting most elegant solution. I would like see your best solution first, after that I will send my best (if not same with yours)"

Notice there is no name, no "please", nothing polite asking for my help. So, off the top of my head I sent him two solutions, following the rule "work on SQL Server 2000 and use only standard, non-native code".
-- Peso 1
SELECT pre_nbr,
       (
           SELECT x.val
           FROM dbo.Nums AS x
           WHERE x.nbr = d.pre_nbr
       ) AS pre_val,
       d.nbr,
       d.val,
       d.nxt_nbr,
       (
           SELECT x.val
           FROM dbo.Nums AS x
           WHERE x.nbr = d.nxt_nbr
       ) AS nxt_val
FROM (
           SELECT (
                      SELECT MAX(x.nbr) AS nbr
                      FROM dbo.Nums AS x
                      WHERE x.nbr < n.nbr
                  ) AS pre_nbr,
                  n.nbr,
                  n.val,
                  (
                      SELECT MIN(x.nbr) AS nbr
                      FROM dbo.Nums AS x
                      WHERE x.nbr > n.nbr
                  ) AS nxt_nbr
           FROM dbo.Nums AS n
     ) AS d

-- Peso 2
CREATE TABLE #Temp
(
    ID INT IDENTITY(1, 1) PRIMARY KEY,
    nbr INT,
    val INT
)

INSERT #Temp
(
    nbr,
    val
)
SELECT nbr,
       val
FROM dbo.Nums
ORDER BY nbr

SELECT pre.nbr AS pre_nbr,
       pre.val AS pre_val,
       t.nbr,
       t.val,
       nxt.nbr AS nxt_nbr,
       nxt.val AS nxt_val
FROM #Temp AS pre
RIGHT JOIN #Temp AS t ON t.ID = pre.ID + 1
LEFT JOIN #Temp AS nxt ON nxt.ID = t.ID + 1

DROP TABLE #Temp

Notice there are no indexes on the #Temp table yet. And here is where the conversation derailed. First I got this response back:

"Now my solutions:

--My 1st Slt
SELECT T2.*, T1.*, T3.*
  FROM Nums AS T1
       LEFT JOIN Nums AS T2
         ON T2.nbr = (SELECT MAX(nbr)
                        FROM Nums
                       WHERE nbr < T1.nbr)
       LEFT JOIN Nums AS T3
         ON T3.nbr = (SELECT MIN(nbr)
                        FROM Nums
                       WHERE nbr > T1.nbr);

--My 2nd Slt
SELECT MAX(CASE WHEN N1.nbr > N2.nbr THEN N2.nbr ELSE NULL END) AS pre_nbr,
       (SELECT val FROM Nums WHERE nbr = MAX(CASE WHEN N1.nbr > N2.nbr THEN N2.nbr ELSE NULL END)) AS pre_val,
       N1.nbr AS cur_nbr, N1.val AS cur_val,
       MIN(CASE WHEN N1.nbr < N2.nbr THEN N2.nbr ELSE NULL END) AS nxt_nbr,
       (SELECT val FROM Nums WHERE nbr = MIN(CASE WHEN N1.nbr < N2.nbr THEN N2.nbr ELSE NULL END)) AS nxt_val
  FROM Nums AS N1,
       Nums AS N2
 GROUP BY N1.nbr, N1.val;

/*
My 1st Slt
Table 'Nums'. Scan count 7, logical reads 14
My 2nd Slt
Table 'Nums'. Scan count 4, logical reads 23
Peso 1
Table 'Nums'. Scan count 9, logical reads 28
Peso 2
Table '#Temp'. Scan count 0, logical reads 7
Table 'Nums'. Scan count 1, logical reads 2
Table '#Temp'. Scan count 3, logical reads 16
*/"

To this, I emailed him back asking for a scalability test: "What if you try with a Nums table with 100,000 rows?" His response to that started to get nasty:

"I have to say Peso 2 is not acceptable. As I said before the solution must be standard, ORDER BY is not part of standard SELECT. Try this without ORDER BY:

Truncate Table Nums
INSERT INTO Nums (nbr, val) VALUES (1, 0),(9,4), (5, 7)"

So now we have new rules. No ORDER BY, because it's not standard SQL! Of course I asked him: "Why do you have that idea? ORDER BY is not standard?" To this, his replies went stranger and stranger:

"Standard Select = Set-based (no any cursor) It's free to know, just refer to Advanced SQL Programming by Celko or mail to him if you accept comments from him."

What the stalker probably doesn't know is that Mr Celko and I are occasionally involved in conversation and exchange emails from time to time. I don't know if this reference to Mr Celko was made to intimidate me either. So I answered him, still politely: "What do you mean? The SELECT itself has a 'cursor under the hood'." Now the stalker gets rude:

"But however I mean the solution must no containing any order by, top... No problem, I do not like Peso 2, it's very non-intelligent and elementary."

Yes, Peso 2 is elementary, but most performing queries are... And now is when I started to feel the stalker really wanted to achieve something else, so I wrote to him: "So what is your goal? Have a query that performs well, or a query that is super-portable? My Peso 2 outperforms any of your code by a factor of 100 when using more than 100,000 rows."
While I awaited his answer, I posted him another query: "Ok, here is another one."

-- Peso 3
SELECT MAX(CASE WHEN d = 1 THEN nbr ELSE NULL END) AS pre_nbr,
       MAX(CASE WHEN d = 1 THEN val ELSE NULL END) AS pre_val,
       MAX(CASE WHEN d = 0 THEN nbr ELSE NULL END) AS nbr,
       MAX(CASE WHEN d = 0 THEN val ELSE NULL END) AS val,
       MAX(CASE WHEN d = -1 THEN nbr ELSE NULL END) AS nxt_nbr,
       MAX(CASE WHEN d = -1 THEN val ELSE NULL END) AS nxt_val
FROM (
         SELECT nbr,
                val,
                ROW_NUMBER() OVER (ORDER BY nbr) AS SeqID
         FROM dbo.Nums
     ) AS s
CROSS JOIN (
         VALUES (-1),
                (0),
                (1)
     ) AS x(d)
GROUP BY SeqID + x.d
HAVING COUNT(*) > 1

And here are the stats:

Table 'Nums'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

It beats the hell out of your queries…. Now I finally got a response from my stalker, and now it also clicked who he was. This is his response:

"Why you post my original method with a bit change under you name? I do not like it. See: http://www.sqlservercentral.com/Forums/Topic468501-362-14.aspx

;WITH C AS (
SELECT seq_nbr, k,
       DENSE_RANK() OVER(ORDER BY seq_nbr ASC) + k AS grp_fct
  FROM [Sample]
       CROSS JOIN
       (VALUES (-1), (0), (1)
       ) AS D(k)
)
SELECT MIN(seq_nbr) AS pre_value,
       MAX(CASE WHEN k = 0 THEN seq_nbr END) AS current_value,
       MAX(seq_nbr) AS next_value
  FROM C
GROUP BY grp_fct
HAVING min(seq_nbr) < max(seq_nbr);

These posts: Posted Tuesday, April 12, 2011 10:04 AM / Posted Tuesday, April 12, 2011 1:22 PM. Why post a solution where will not work in SQL Server 2000?"

Wait a minute! His own solution uses both a CTE and a ranking function, so his query will not work on SQL Server 2000 either! Bummer... The reference to "Me not like" matches my exact words in a previous topic on SQLTeam.com, and when I remembered the phrasing, I also knew who he was. See this topic http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=159262 where he writes a query and posts it under my name, as if I wrote it. So I answered him this (less politely):

"Like I keep track of all topics in the whole world… :) So you think you are the only one coming up with this idea? Besides, "M S solution" doesn't work. This is the result I get:

pre_value   current_value   next_value
1           1               5
1           5               9
5           9               9

And I did nothing like you did here, where you posted a solution which you "thought" I should write http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=159262 So why are you yourself using a ranking function when this was not allowed per your original email, and no CTE? You use a CTE in your link above, which does not work in SQL Server 2000. All this makes no sense to me, other than you are trying your best to once in a lifetime create a better performing query than me?"

After a few hours I got this email back. I don't fully understand it, but it's probably a language barrier.
>>Like I keep track of all topics in the whole world… :) So you think you are the only one coming up with this idea?<<
You right, but do not think you are the first creator of this.

>>Besides, "M S Solution" doesn't work. This is the result I get<<
Why you get so unimportant mistake? See this post to correct it: Posted 4/12/2011 8:22:23 PM

>>So why are you yourself using ranking function when this was not allowed per your original email, and no cte? You use CTE in your link above, which do not work in SQL Server 2000.<<
Again, why you get some unimportant incompatibility? You offer that solution for current goals not me

>>All this makes no sense to me, other than you are trying your best to once in a lifetime create a better performing query than me?<<
No, I only wanted to know who you will solve it. Now I know you do not have a special solution. No problem.

No problem for me either. So I just answered him: "I am not the first, and you are not the first to come up with this idea. So what is your problem? I am pretty sure other people have come up with the same idea before us. I used this technique all the way back in 2007, see http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=93911"

Let's see if he returns... He did!

>>So what is your problem?<<
Nothing Thanks for all replies; maybe we have some competitions in future, maybe. Also I like you but you do not attend it. Your behavior with me is not friendly. Not any meeting…

Regards //Peso
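A closing footnote on the technique itself: on SQL Server 2012 and later (well outside the SQL Server 2000 constraint the correspondent kept insisting on), the previous/next-row result that Peso 1 through Peso 3 compute falls out of LAG and LEAD directly. A sketch against the same Nums table:

-- Neighbors via window functions (SQL Server 2012+)
SELECT LAG(nbr)  OVER (ORDER BY nbr) AS pre_nbr,
       LAG(val)  OVER (ORDER BY nbr) AS pre_val,
       nbr,
       val,
       LEAD(nbr) OVER (ORDER BY nbr) AS nxt_nbr,
       LEAD(val) OVER (ORDER BY nbr) AS nxt_val
FROM dbo.Nums;

The Peso queries in the tale are essentially ways of expressing this same neighbors problem on older engines.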

