Search Results

Search found 96005 results on 3841 pages for 'user group'.

  • Passing XML data and the user from a ColdFusion page to a .NET page

    - by Mark Rullo
    I'd appreciate some input on this situation; I can't figure out the best way to do this. I have some data that's being prepared for me in a ColdFusion app, and in an IFrame within the CF app we want to display some graphs (not strictly an image, it's an entire page) generated on the .NET side of things. I'd like to pass XML data from the CF side to .NET, as well as the user. On the .NET side I'm putting the data in a session so the user can sift through it without the need to have it re-queried and re-passed from CF. What I've tried: generating XML with CF, putting it in a hidden form field, and auto-submitting the form (with JS) to the .NET side. The issue I'm having with this approach is the encoding done on the form post. The data has entries like <entry data="hello &amp; goodbye">. It's an issue because it's being URL-encoded, posted, and when I get it on the .NET side I get <entry data="hello & goodbye">, which isn't well-formed XML. What I'd like to avoid: an intermediary DB approach (dropping the data in a DB on CF, picking it up with .NET); I'd like to only display what is passed to the page, and I have security concerns with the data, it's very sensitive. Also passing the data to a web service, returning a GUID, and forwarding the user with a URL parameter to access the passed-in data; I think that'd be risky if someone happened on a link to that data. I can't take that risk. I was thinking of passing the data with JSON, but I'm very unfamiliar with it. Thoughts? Thanks for your time, folks.
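
    A minimal sketch of the receiving side, assuming the ColdFusion page Base64-encodes the XML (for example with ToBase64()) into the hidden field before the auto-submit, which sidesteps the entity mangling entirely. The field name, session key and class name below are illustrative only, not taken from the question:

        // Hypothetical .NET code-behind for the page the CF form posts to.
        using System;
        using System.Text;
        using System.Web.UI;
        using System.Xml;

        public partial class GraphPage : Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                string encoded = Request.Form["xmlData"];   // hidden field from the CF form
                if (string.IsNullOrEmpty(encoded))
                    return;

                // Base64 survives the form post untouched, so &amp; inside the
                // original XML is not rewritten by HTML/URL encoding on the way over.
                string xml = Encoding.UTF8.GetString(Convert.FromBase64String(encoded));

                var doc = new XmlDocument();
                doc.LoadXml(xml);                 // throws if the payload is not well-formed

                // Keep it server-side so later requests can page through the data
                // without re-querying or re-posting from ColdFusion.
                Session["graphData"] = doc;
            }
        }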

  • ASP.NET Custom/User Control With Children

    - by Bob Fincheimer
    I want to create a custom/user control that has children (NOT a template control). For example, I want my control to have the following markup:

        <div runat="server" id="div">
          <label runat="server" id="label"></label>
          <div class="field">
            <!-- INSERT CHILDREN HERE -->
          </div>
        </div>

    and when I want to use it on a page I simply write:

        <ctr:MyUserControl runat="server" ID="myControl">
          <span>This is a child</span>
          <div>And another <b>child</b></div>
        </ctr:MyUserControl>

    The child controls inside my user control will be inserted into my user control somewhere. What is the best way to accomplish this? The functionality is similar to an asp:PlaceHolder, but I want to add a couple more options as well as additional markup and the like. Also, the child controls still need to be accessible from the page.
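
    One common way to get this behavior (a sketch, not the asker's code) is a server control rather than an .ascx: marking it with ParseChildren(false) and PersistChildren(true) makes the declarative children ordinary child controls, which keeps them reachable from the page, and Render decides where they appear. The class and property names here are made up for illustration:

        using System.Web.UI;
        using System.Web.UI.WebControls;

        // Children declared inside the tag are parsed as child controls rather than
        // as properties, so the hosting page can still reach them via FindControl.
        [ParseChildren(false)]
        [PersistChildren(true)]
        public class FieldWrapper : WebControl
        {
            public string LabelText { get; set; }   // example of an extra option

            protected override void Render(HtmlTextWriter writer)
            {
                writer.AddAttribute(HtmlTextWriterAttribute.Id, ClientID);
                writer.RenderBeginTag(HtmlTextWriterTag.Div);        // outer <div>

                writer.RenderBeginTag(HtmlTextWriterTag.Label);      // <label>
                writer.Write(LabelText);
                writer.RenderEndTag();

                writer.AddAttribute(HtmlTextWriterAttribute.Class, "field");
                writer.RenderBeginTag(HtmlTextWriterTag.Div);        // <div class="field">
                RenderChildren(writer);    // the declarative children land here
                writer.RenderEndTag();

                writer.RenderEndTag();
            }
        }

    Registered with an @ Register directive, it can then be used much like the <ctr:MyUserControl> markup above.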

  • “User Control” uses an incorrect event

    - by Lijo
    Hi, I am using two instances of a user control. But when I click on a button in the user control, both of the instances call the function corresponding to the first event (AcknowledgeReceipt()). However, when I remove the first user control and click on btnRequestClarification, it calls the correct method (RequestClarification()). Is there a way to correct this behavior?

        acknowledgeModificationPopupCtrl = (ConfirmationPopup)this.LoadControl("~/ConfirmationPopup.ascx");
        plHConfirmationPopup.Controls.Add(acknowledgeModificationPopupCtrl);
        acknowledgeModificationPopupCtrl.ContinueClick += new EventHandler(AcknowledgeReceipt);
        RequiredFieldValidator reqFValidatorAcknowledge = (RequiredFieldValidator)acknowledgeModificationPopupCtrl.FindControl("reqValidatorTxt");
        reqFValidatorAcknowledge.ValidationGroup = "AcknowledgeReceipt";
        acknowledgeModificationPopupCtrl.ValidationGroup = "AcknowledgeReceipt";
        btnAcknowledgeReceipt.Attributes.Add("onclick", "validateconfirmPopup('true','xx' ,’yyy','Note' ,'true'); return false;");

        requestModificationPopupCtrl = (ConfirmationPopup)this.LoadControl("~/ConfirmationPopup.ascx");
        plHConfirmationPopup.Controls.Add(requestModificationPopupCtrl);
        requestModificationPopupCtrl.ContinueClick += new EventHandler(RequestClarification);
        RequiredFieldValidator reqFValidator = (RequiredFieldValidator)requestModificationPopupCtrl.FindControl("reqValidatorTxt");
        reqFValidator.ValidationGroup = "request";
        requestModificationPopupCtrl.ValidationGroup = "request";
        btnRequestClarification.Attributes.Add("onclick", "validateconfirmPopup('true',’kkk' ,’lll','ff' ,'true'); return false;");

    Thanks, Lijo
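
    One thing worth ruling out with dynamically loaded controls like these (a guess from the snippet, not a confirmed diagnosis) is that the two LoadControl instances never receive distinct IDs, in which case view state and postback events cannot reliably be routed to the right instance. A sketch using the question's own member names, with explicit (hypothetical) IDs added:

        // Give each dynamically created instance a stable, unique ID before wiring events.
        acknowledgeModificationPopupCtrl =
            (ConfirmationPopup)this.LoadControl("~/ConfirmationPopup.ascx");
        acknowledgeModificationPopupCtrl.ID = "acknowledgePopup";      // hypothetical ID
        plHConfirmationPopup.Controls.Add(acknowledgeModificationPopupCtrl);
        acknowledgeModificationPopupCtrl.ContinueClick += new EventHandler(AcknowledgeReceipt);

        requestModificationPopupCtrl =
            (ConfirmationPopup)this.LoadControl("~/ConfirmationPopup.ascx");
        requestModificationPopupCtrl.ID = "requestPopup";              // hypothetical ID
        plHConfirmationPopup.Controls.Add(requestModificationPopupCtrl);
        requestModificationPopupCtrl.ContinueClick += new EventHandler(RequestClarification);

    Recreating both controls with the same IDs, in the same order, on every postback also matters for the events to reach the right instance.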

  • MS Access PIVOT with User Defined Field

    - by user2535359
    Any of you good souls please help!! I need to query the source table shown below (NULL means a blank field):

        UNUM, Ticket, Overflow
        1, 135, NULL
        1, 136, NULL
        1, 137, NULL
        1, 138, NULL
        1, NULL, 2b
        2, 135, NULL
        2, 136, NULL
        2, 137, NULL
        3, 135, NULL
        3, 136, NULL
        3, 137, NULL
        3, 138, NULL
        3, 139, NULL
        3, 140, NULL
        3, NULL, 66a
        4, NULL, 12a
        5, NULL, 14a

    I need to generate the output shown below:

        UserNum, Ticket1, Ticket2, Ticket3, Ticket4, Ticket5, Ticket6, Ticket7, Ticket8, Ticket9, Overflow
        1, 135, 136, 137, 138, Null, Null, Null, Null, Null, 2b
        2, 135, 136, 137, Null, Null, Null, Null, Null, Null, Null
        3, 135, 136, 137, 138, 139, 140, Null, Null, Null, 66a
        4, Null, Null, Null, Null, Null, Null, Null, Null, Null, 12a
        5, Null, Null, Null, Null, Null, Null, Null, Null, Null, 14a

    The source table has multiple tickets assigned to each user. There are always at most 9 tickets. A user has either tickets or an overflow, but there can be only one overflow per user. I am having trouble pivoting the data in the Ticket column into pre-defined field names like Ticket1, Ticket2...

  • Rails Form Submission, User can't be blank

    - by pmanning
    I'm trying to create an event through an event form and I keep getting a form error that says "User can't be blank". The event needs a user_id to post a feed_item showing who created the event. Why can't this event get created? event.rb class Event < ActiveRecord::Base attr_accessible :description, :location, :title, :category_id, :start_date, :start_time, :end_date, :end_time, :image belongs_to :user belongs_to :category has_many :rsvps has_many :users, through: :rsvps, dependent: :destroy mount_uploader :image, ImageUploader validates :title, presence: true, length: { maximum: 60 } validates :user_id, presence: true create_events.rb (database) class CreateEvents < ActiveRecord::Migration def change create_table :events do |t| t.string :title t.date :start_date t.time :start_time t.date :end_date t.time :end_time t.string :location t.string :description t.integer :category_id t.integer :user_id t.timestamps end add_index :events, [:user_id, :created_at] end end events_controller.rb def new @event = Event.new @user = current_user end def create @event = current_user.events.build(params[:event]) if @event.save flash[:success] = "Sesh created!" redirect_to root_url else @feed_items = [] render 'static_pages/home' end end routes.rb SampleApp::Application.routes.draw do resources :users do member do get :following, :followers, :events end end resources :events do member do get :members end end root to: 'static_pages#home' events/new.html.erb <%= form_for @event, :html => {:multipart => true} do |f| %> <%= render 'shared/error_messages', object: @event %>

  • Sent Items code in Java

    - by Farhan Khan
    I need urgent help, if any one can resolve my issue it will be very highly appriciated. I have create a SMS composer on jAVA netbians its on urdu language. the probelm is its not saving sent sms on Sent items.. i have tried my best to make the code but failed. Tomorrow is my last day to present the code on university please help me please below is the code that i have made till now. Please any one.... /* * To change this template, choose Tools | Templates * and open the template in the editor. */ package newSms; import javax.microedition.io.Connector; import javax.microedition.midlet.*; import javax.microedition.lcdui.*; import javax.wireless.messaging.MessageConnection; import javax.wireless.messaging.TextMessage; import org.netbeans.microedition.util.SimpleCancellableTask; /** * @author AHTISHAM */ public class composeurdu extends MIDlet implements CommandListener, ItemCommandListener, ItemStateListener { private boolean midletPaused = false; private boolean isUrdu; String numb=" "; Alert alert; //<editor-fold defaultstate="collapsed" desc=" Generated Fields ">//GEN-BEGIN:|fields|0| private Form form; private TextField number; private TextField textUrdu; private StringItem stringItem; private StringItem send; private Command exit; private Command sendMesg; private Command add; private Command urdu; private Command select; private SimpleCancellableTask task; //</editor-fold>//GEN-END:|fields|0| MessageConnection clientConn; private Display display; public composeurdu() { display = Display.getDisplay(this); } private void showMessage(){ display=Display.getDisplay(this); //numb=number.getString(); if(number.getString().length()==0 || textUrdu.getString().length()==0){ Alert alert=new Alert("error "); alert.setString(" Enter phone number"); alert.setTimeout(5000); display.setCurrent(alert); } else if(number.getString().length()>11){ Alert alert=new Alert("error "); alert.setString("invalid number"); alert.setTimeout(5000); display.setCurrent(alert); } else{ Alert alert=new Alert("error "); alert.setString("success"); alert.setTimeout(5000); display.setCurrent(alert); } } void showMessage(String message, Displayable displayable) { Alert alert = new Alert(""); alert.setTitle("Error"); alert.setString(message); alert.setType(AlertType.ERROR); alert.setTimeout(5000); display.setCurrent(alert, displayable); } //<editor-fold defaultstate="collapsed" desc=" Generated Methods ">//GEN-BEGIN:|methods|0| //</editor-fold>//GEN-END:|methods|0| //<editor-fold defaultstate="collapsed" desc=" Generated Method: initialize ">//GEN-BEGIN:|0-initialize|0|0-preInitialize /** * Initializes the application. It is called only once when the MIDlet is * started. The method is called before the * <code>startMIDlet</code> method. */ private void initialize() {//GEN-END:|0-initialize|0|0-preInitialize // write pre-initialize user code here //GEN-LINE:|0-initialize|1|0-postInitialize // write post-initialize user code here }//GEN-BEGIN:|0-initialize|2| //</editor-fold>//GEN-END:|0-initialize|2| //<editor-fold defaultstate="collapsed" desc=" Generated Method: startMIDlet ">//GEN-BEGIN:|3-startMIDlet|0|3-preAction /** * Performs an action assigned to the Mobile Device - MIDlet Started point. 
*/ public void startMIDlet() {//GEN-END:|3-startMIDlet|0|3-preAction // write pre-action user code here switchDisplayable(null, getForm());//GEN-LINE:|3-startMIDlet|1|3-postAction // write post-action user code here form.setCommandListener(this); form.setItemStateListener(this); }//GEN-BEGIN:|3-startMIDlet|2| //</editor-fold>//GEN-END:|3-startMIDlet|2| //<editor-fold defaultstate="collapsed" desc=" Generated Method: resumeMIDlet ">//GEN-BEGIN:|4-resumeMIDlet|0|4-preAction /** * Performs an action assigned to the Mobile Device - MIDlet Resumed point. */ public void resumeMIDlet() {//GEN-END:|4-resumeMIDlet|0|4-preAction // write pre-action user code here //GEN-LINE:|4-resumeMIDlet|1|4-postAction // write post-action user code here }//GEN-BEGIN:|4-resumeMIDlet|2| //</editor-fold>//GEN-END:|4-resumeMIDlet|2| //<editor-fold defaultstate="collapsed" desc=" Generated Method: switchDisplayable ">//GEN-BEGIN:|5-switchDisplayable|0|5-preSwitch /** * Switches a current displayable in a display. The * <code>display</code> instance is taken from * <code>getDisplay</code> method. This method is used by all actions in the * design for switching displayable. * * @param alert the Alert which is temporarily set to the display; if * <code>null</code>, then * <code>nextDisplayable</code> is set immediately * @param nextDisplayable the Displayable to be set */ public void switchDisplayable(Alert alert, Displayable nextDisplayable) {//GEN-END:|5-switchDisplayable|0|5-preSwitch // write pre-switch user code here Display display = getDisplay();//GEN-BEGIN:|5-switchDisplayable|1|5-postSwitch if (alert == null) { display.setCurrent(nextDisplayable); } else { display.setCurrent(alert, nextDisplayable); }//GEN-END:|5-switchDisplayable|1|5-postSwitch // write post-switch user code here }//GEN-BEGIN:|5-switchDisplayable|2| //</editor-fold>//GEN-END:|5-switchDisplayable|2| //<editor-fold defaultstate="collapsed" desc=" Generated Method: commandAction for Displayables ">//GEN-BEGIN:|7-commandAction|0|7-preCommandAction /** * Called by a system to indicated that a command has been invoked on a * particular displayable. 
* * @param command the Command that was invoked * @param displayable the Displayable where the command was invoked */ public void commandAction(Command command, Displayable displayable) {//GEN-END:|7-commandAction|0|7-preCommandAction // write pre-action user code here if (displayable == form) {//GEN-BEGIN:|7-commandAction|1|16-preAction if (command == exit) {//GEN-END:|7-commandAction|1|16-preAction // write pre-action user code here exitMIDlet();//GEN-LINE:|7-commandAction|2|16-postAction // write post-action user code here } else if (command == sendMesg) {//GEN-LINE:|7-commandAction|3|18-preAction // write pre-action user code here String mno=number.getString(); String msg=textUrdu.getString(); if(mno.equals("")) { alert = new Alert("Alert"); alert.setString("Enter Mobile Number!!!"); alert.setTimeout(2000); display.setCurrent(alert); } else { try { clientConn=(MessageConnection)Connector.open("sms://"+mno); } catch(Exception e) { alert = new Alert("Alert"); alert.setString("Unable to connect to Station because of network problem"); alert.setTimeout(2000); display.setCurrent(alert); } try { TextMessage textmessage = (TextMessage) clientConn.newMessage(MessageConnection.TEXT_MESSAGE); textmessage.setAddress("sms://"+mno); textmessage.setPayloadText(msg); clientConn.send(textmessage); } catch(Exception e) { Alert alert=new Alert("Alert","",null,AlertType.INFO); alert.setTimeout(Alert.FOREVER); alert.setString("Unable to send"); display.setCurrent(alert); } } //GEN-LINE:|7-commandAction|4|18-postAction // write post-action user code here }//GEN-BEGIN:|7-commandAction|5|7-postCommandAction }//GEN-END:|7-commandAction|5|7-postCommandAction // write post-action user code here }//GEN-BEGIN:|7-commandAction|6| //</editor-fold>//GEN-END:|7-commandAction|6| //<editor-fold defaultstate="collapsed" desc=" Generated Method: commandAction for Items ">//GEN-BEGIN:|8-itemCommandAction|0|8-preItemCommandAction /** * Called by a system to indicated that a command has been invoked on a * particular item. 
* * @param command the Command that was invoked * @param displayable the Item where the command was invoked */ public void commandAction(Command command, Item item) {//GEN-END:|8-itemCommandAction|0|8-preItemCommandAction // write pre-action user code here if (item == number) {//GEN-BEGIN:|8-itemCommandAction|1|21-preAction if (command == add) {//GEN-END:|8-itemCommandAction|1|21-preAction // write pre-action user code here //GEN-LINE:|8-itemCommandAction|2|21-postAction // write post-action user code here }//GEN-BEGIN:|8-itemCommandAction|3|28-preAction } else if (item == send) { if (command == select) {//GEN-END:|8-itemCommandAction|3|28-preAction // write pre-action user code here //GEN-LINE:|8-itemCommandAction|4|28-postAction // write post-action user code here }//GEN-BEGIN:|8-itemCommandAction|5|24-preAction } else if (item == textUrdu) { if (command == urdu) {//GEN-END:|8-itemCommandAction|5|24-preAction // write pre-action user code here if (isUrdu) isUrdu = false; else { isUrdu = true; TextField tf = (TextField)item; } //GEN-LINE:|8-itemCommandAction|6|24-postAction // write post-action user code here }//GEN-BEGIN:|8-itemCommandAction|7|8-postItemCommandAction }//GEN-END:|8-itemCommandAction|7|8-postItemCommandAction // write post-action user code here }//GEN-BEGIN:|8-itemCommandAction|8| //</editor-fold>//GEN-END:|8-itemCommandAction|8| //<editor-fold defaultstate="collapsed" desc=" Generated Getter: form ">//GEN-BEGIN:|14-getter|0|14-preInit /** * Returns an initialized instance of form component. * * @return the initialized component instance */ public Form getForm() { if (form == null) {//GEN-END:|14-getter|0|14-preInit // write pre-init user code here form = new Form("form", new Item[]{getNumber(), getTextUrdu(), getStringItem(), getSend()});//GEN-BEGIN:|14-getter|1|14-postInit form.addCommand(getExit()); form.addCommand(getSendMesg()); form.setCommandListener(this);//GEN-END:|14-getter|1|14-postInit // write post-init user code here form.setItemStateListener(this); // form.setCommandListener(this); }//GEN-BEGIN:|14-getter|2| return form; } //</editor-fold>//GEN-END:|14-getter|2| //<editor-fold defaultstate="collapsed" desc=" Generated Getter: number ">//GEN-BEGIN:|19-getter|0|19-preInit /** * Returns an initialized instance of number component. * * @return the initialized component instance */ public TextField getNumber() { if (number == null) {//GEN-END:|19-getter|0|19-preInit // write pre-init user code here number = new TextField("Number ", null, 11, TextField.PHONENUMBER);//GEN-BEGIN:|19-getter|1|19-postInit number.addCommand(getAdd()); number.setItemCommandListener(this); number.setDefaultCommand(getAdd());//GEN-END:|19-getter|1|19-postInit // write post-init user code here }//GEN-BEGIN:|19-getter|2| return number; } //</editor-fold>//GEN-END:|19-getter|2| //<editor-fold defaultstate="collapsed" desc=" Generated Getter: textUrdu ">//GEN-BEGIN:|22-getter|0|22-preInit /** * Returns an initialized instance of textUrdu component. 
* * @return the initialized component instance */ public TextField getTextUrdu() { if (textUrdu == null) {//GEN-END:|22-getter|0|22-preInit // write pre-init user code here textUrdu = new TextField("Message", null, 2000, TextField.ANY);//GEN-BEGIN:|22-getter|1|22-postInit textUrdu.addCommand(getUrdu()); textUrdu.setItemCommandListener(this);//GEN-END:|22-getter|1|22-postInit // write post-init user code here }//GEN-BEGIN:|22-getter|2| return textUrdu; } //</editor-fold>//GEN-END:|22-getter|2| //<editor-fold defaultstate="collapsed" desc=" Generated Getter: exit ">//GEN-BEGIN:|15-getter|0|15-preInit /** * Returns an initialized instance of exit component. * * @return the initialized component instance */ public Command getExit() { if (exit == null) {//GEN-END:|15-getter|0|15-preInit // write pre-init user code here exit = new Command("Exit", Command.EXIT, 0);//GEN-LINE:|15-getter|1|15-postInit // write post-init user code here }//GEN-BEGIN:|15-getter|2| return exit; } //</editor-fold>//GEN-END:|15-getter|2| //<editor-fold defaultstate="collapsed" desc=" Generated Getter: sendMesg ">//GEN-BEGIN:|17-getter|0|17-preInit /** * Returns an initialized instance of sendMesg component. * * @return the initialized component instance */ public Command getSendMesg() { if (sendMesg == null) {//GEN-END:|17-getter|0|17-preInit // write pre-init user code here sendMesg = new Command("send", Command.OK, 0);//GEN-LINE:|17-getter|1|17-postInit // write post-init user code here }//GEN-BEGIN:|17-getter|2| return sendMesg; } //</editor-fold>//GEN-END:|17-getter|2| //<editor-fold defaultstate="collapsed" desc=" Generated Getter: add ">//GEN-BEGIN:|20-getter|0|20-preInit /** * Returns an initialized instance of add component. * * @return the initialized component instance */ public Command getAdd() { if (add == null) {//GEN-END:|20-getter|0|20-preInit // write pre-init user code here add = new Command("add", Command.OK, 0);//GEN-LINE:|20-getter|1|20-postInit // write post-init user code here }//GEN-BEGIN:|20-getter|2| return add; } //</editor-fold>//GEN-END:|20-getter|2| //<editor-fold defaultstate="collapsed" desc=" Generated Getter: urdu ">//GEN-BEGIN:|23-getter|0|23-preInit /** * Returns an initialized instance of urdu component. * * @return the initialized component instance */ public Command getUrdu() { if (urdu == null) {//GEN-END:|23-getter|0|23-preInit // write pre-init user code here urdu = new Command("urdu", Command.OK, 0);//GEN-LINE:|23-getter|1|23-postInit // write post-init user code here }//GEN-BEGIN:|23-getter|2| return urdu; } //</editor-fold>//GEN-END:|23-getter|2| //<editor-fold defaultstate="collapsed" desc=" Generated Getter: stringItem ">//GEN-BEGIN:|25-getter|0|25-preInit /** * Returns an initialized instance of stringItem component. * * @return the initialized component instance */ public StringItem getStringItem() { if (stringItem == null) {//GEN-END:|25-getter|0|25-preInit // write pre-init user code here stringItem = new StringItem("string", null);//GEN-LINE:|25-getter|1|25-postInit // write post-init user code here }//GEN-BEGIN:|25-getter|2| return stringItem; } //</editor-fold>//GEN-END:|25-getter|2| //</editor-fold> //<editor-fold defaultstate="collapsed" desc=" Generated Getter: send ">//GEN-BEGIN:|26-getter|0|26-preInit /** * Returns an initialized instance of send component. 
* * @return the initialized component instance */ public StringItem getSend() { if (send == null) {//GEN-END:|26-getter|0|26-preInit // write pre-init user code here send = new StringItem("", "send", Item.BUTTON);//GEN-BEGIN:|26-getter|1|26-postInit send.addCommand(getSelect()); send.setItemCommandListener(this); send.setDefaultCommand(getSelect());//GEN-END:|26-getter|1|26-postInit // write post-init user code here }//GEN-BEGIN:|26-getter|2| return send; } //</editor-fold>//GEN-END:|26-getter|2| //<editor-fold defaultstate="collapsed" desc=" Generated Getter: select ">//GEN-BEGIN:|27-getter|0|27-preInit /** * Returns an initialized instance of select component. * * @return the initialized component instance */ public Command getSelect() { if (select == null) {//GEN-END:|27-getter|0|27-preInit // write pre-init user code here select = new Command("select", Command.OK, 0);//GEN-LINE:|27-getter|1|27-postInit // write post-init user code here }//GEN-BEGIN:|27-getter|2| return select; } //</editor-fold>//GEN-END:|27-getter|2| //<editor-fold defaultstate="collapsed" desc=" Generated Getter: task ">//GEN-BEGIN:|40-getter|0|40-preInit /** * Returns an initialized instance of task component. * * @return the initialized component instance */ public SimpleCancellableTask getTask() { if (task == null) {//GEN-END:|40-getter|0|40-preInit // write pre-init user code here task = new SimpleCancellableTask();//GEN-BEGIN:|40-getter|1|40-execute task.setExecutable(new org.netbeans.microedition.util.Executable() { public void execute() throws Exception {//GEN-END:|40-getter|1|40-execute // write task-execution user code here }//GEN-BEGIN:|40-getter|2|40-postInit });//GEN-END:|40-getter|2|40-postInit // write post-init user code here }//GEN-BEGIN:|40-getter|3| return task; } //</editor-fold>//GEN-END:|40-getter|3| /** * Returns a display instance. * @return the display instance. */ public Display getDisplay () { return Display.getDisplay(this); } /** * Exits MIDlet. */ public void exitMIDlet() { switchDisplayable (null, null); destroyApp(true); notifyDestroyed(); } /** * Called when MIDlet is started. * Checks whether the MIDlet have been already started and initialize/starts or resumes the MIDlet. */ public void startApp() { startMIDlet(); display.setCurrent(form); } /** * Called when MIDlet is paused. */ public void pauseApp() { midletPaused = true; } /** * Called to signal the MIDlet to terminate. * @param unconditional if true, then the MIDlet has to be unconditionally terminated and all resources has to be released. */ public void destroyApp(boolean unconditional) { } public void itemStateChanged(Item item) { if (item == textUrdu) { if (isUrdu) { stringItem.setText("urdu"); TextField tf = (TextField)item; String s = tf.getString(); char ch = s.charAt(s.length() - 1); s = s.substring(0, s.length() - 1); ch = Urdu.ToUrdu(ch); s = s + String.valueOf(ch); tf.setString(""); tf.setString(s); }//end if throw new UnsupportedOperationException("Not supported yet."); } } }

  • Create account for service

    - by Andy
    I am configuring a new server. The server is running Hudson, which is going to copy some files from this server to another. The other server is a virtual machine. Both are running Windows Server 2012. Hudson is started on server A with the log on set to "Local System". When I come to the copy phase it says "Access denied". Changing the log on to "Administrator" works; however, I guess this is bad. I do not have much experience with user management. I tried to create my own hudson account on both servers A and B. I tried to set the service to log on as the hudson account in service management, but it doesn't start. How would you create an account for this particular service that has access to the shared folder on server B and can be used to start the service on server A? I guess I need two accounts with the same username and password on server A and server B? The folder on server B is shared with everyone and the guest account is enabled.

  • Upgraded users to Win7. Now getting "path not found" when saving files or opening attachments

    - by Matt Penner
    We have a Server 2008 AD environment with about 5k users. We just rolled out Windows 7 SP1 (they were on XP) with great success. However, about once a day we get a few calls that a user opens a file from their Documents (the folder is on the server and redirected), edits it and attempts to save, but Win7 reports that the path is not found, either because it doesn't exist or because of missing permissions. The only way to fix it is to delete the profile. In addition, we get about the same number of calls, from different users, saying that they cannot open attachments from Outlook 2010 due to no permission. We have to edit the temp Outlook storage path in the registry to fix it (or delete the profile). I think the two issues may be related. What scares us is that we rolled out 1 month ago and had no calls of this nature until about 2 weeks ago. It started off as one or two but seems to be growing. Any ideas? We're going to open a Microsoft ticket, but I wanted to see if anyone else has run into this. Thanks!

  • Exim, how to route local mail to another address

    - by kheraud
    I have set up an Exim4 server on my Debian Wheezy machine. This mail server only sends mail coming from localhost; the purpose is sending mail for my website. I have cron tasks and other services generating mail for the root user. These mails are not stored in /var/mail as before, but sent by Exim to [email protected]. I am trying to make Exim send mail for root to [email protected] rather than [email protected]. I tried adding a .forward in /root with [email protected] as content. I also tried changing /etc/aliases with root: [email protected]. The fact is that routing works for root@localhost but not for root, which is resolved as [email protected]. I tested how routing is resolved with exim -bt:

        root@srv02:~# exim -bt root@localhost
        R: system_aliases for root@localhost
        R: dnslookup for [email protected]
        [email protected] <-- root@localhost
          router = dnslookup, transport = remote_smtp
          host gmail-smtp-in.l.google.com [173.194.67.27] MX=5
          host alt1.gmail-smtp-in.l.google.com [74.125.143.27] MX=10
          host alt2.gmail-smtp-in.l.google.com [74.125.25.27] MX=20
          host alt3.gmail-smtp-in.l.google.com [173.194.64.27] MX=30
          host alt4.gmail-smtp-in.l.google.com [74.125.142.27] MX=40

        root@srv02:~# exim -bt root
        R: dnslookup for [email protected]
        [email protected]
          router = dnslookup, transport = remote_smtp
          host aspmx.l.google.com [173.194.78.27] MX=1
          host alt1.aspmx.l.google.com [74.125.143.27] MX=5
          host alt2.aspmx.l.google.com [74.125.25.27] MX=5
          host alt4.aspmx.l.google.com [74.125.142.27] MX=10
          host alt3.aspmx.l.google.com [173.194.64.27] MX=10

    I bet this is a matter of how my server is configured (rather than how Exim is configured). But to understand it well, I would like to have a solution for both: how to have root resolved as root@localhost, and how to have [email protected] routed to [email protected]?

  • How can I generate filesystem images that are usable on many different virtualization systems?

    - by Mark Longair
    I have written a script that generates a root filesystem image (based on Debian lenny) suitable for User-Mode Linux. (Essentially this script creates a filesystem image, mounts it with a loop device, uses debootstrap to create a lenny install, sets up a static IP for TUN/TAP networking, adds public keys for login by SSH and installs a web application.) These filesystem images work pretty well with UML, but it would be nice to be able to generate similar images that people can use on alternative virtualization software, and I'm not familiar with these options at all. In particular, since the idea is to use this image as a standalone server for testing the web application, it's important that the networking works. I wonder if anyone can suggest what would be involved in customizing such root filesystem images such that they could be used with other virtualization software, such as VMware, Xen or as an Amazon EC2 instance? Two particular concerns are: If such systems don't use a raw filesystem image (e.g. they need headers with metadata or are compressed in some particular way) do there exist tools to convert between the different formats? I assume that in the filesystem, at least /etc/network/interfaces will have to be customized, but are more involved changes likely to be necessary? Many thanks for any suggestions...

  • How to train users converting from PC to Mac/Apple at a small non profit?

    - by Everette Mills
    Background: I am part of a team that provides volunteer tech support to a local nonprofit. We are in a position to obtain a grant to update almost all of our computers (many of them 5 to 7 year old machines running XP), provide laptops for users that need them, etc. We are considering switching our users from PCs (WinXP) to Macs. The technical aspects of switching will not be an issue for the team. We are in the process of planning data conversions, machine setup, server changes, etc. regardless of whether we switch to Macs or much newer PCs. About 1/4 of the staff uses or has access to a Mac at home; these users already understand the basics of using the equipment. We have another set of (generally younger) users who are technically savvy and, while slightly inconvenienced and slowed for a few days, should be able to switch over quickly. Finally, several members of the staff are older and have many issues using their computers today. We think in the long run switching to Macs may provide a better user experience, fewer IT headaches, and more effective use of computers. The question we have is: what resources and training (webpages, books, online training materials or online courses) do you recommend we provide to users to enable the switchover to happen smoothly? Especially with a focus on providing different levels of training and support to users with different skill levels. If you have done this in your own organization, what steps were successful, and what areas were less successful?

  • Get User SID From Logon ID (Windows XP and Up)

    - by Dave Ruske
    I have a Windows service that needs to access registry hives under HKEY_USERS when users log on, either locally or via Terminal Server. I'm using a WMI query on win32_logonsession to receive events when users log on, and one of the properties I get from that query is a LogonId. To figure out which registry hive I need to access, now, I need the users's SID, which is used as a registry key name beneath HKEY_USERS. In most cases, I can get this by doing a RelatedObjectQuery like so (in C#): RelatedObjectQuery relatedQuery = new RelatedObjectQuery( "associators of {Win32_LogonSession.LogonId='" + logonID + "'} WHERE AssocClass=Win32_LoggedOnUser Role=Dependent" ); where "logonID" is the logon session ID from the session query. Running the RelatedObjectQuery will generally give me a SID property that contains exactly what I need. There are two issues I have with this. First and most importantly, the RelatedObjectQuery will not return any results for a domain user that logs in with cached credentials, disconnected from the domain. Second, I'm not pleased with the performance of this RelatedObjectQuery --- it can take up to several seconds to execute. Here's a quick and dirty command line program I threw together to experiment with the queries. Rather than setting up to receive events, this just enumerates the users on the local machine: using System; using System.Collections.Generic; using System.Text; using System.Management; namespace EnumUsersTest { class Program { static void Main( string[] args ) { ManagementScope scope = new ManagementScope( "\\\\.\\root\\cimv2" ); string queryString = "select * from win32_logonsession"; // for all sessions //string queryString = "select * from win32_logonsession where logontype = 2"; // for local interactive sessions only ManagementObjectSearcher sessionQuery = new ManagementObjectSearcher( scope, new SelectQuery( queryString ) ); ManagementObjectCollection logonSessions = sessionQuery.Get(); foreach ( ManagementObject logonSession in logonSessions ) { string logonID = logonSession["LogonId"].ToString(); Console.WriteLine( "=== {0}, type {1} ===", logonID, logonSession["LogonType"].ToString() ); RelatedObjectQuery relatedQuery = new RelatedObjectQuery( "associators of {Win32_LogonSession.LogonId='" + logonID + "'} WHERE AssocClass=Win32_LoggedOnUser Role=Dependent" ); ManagementObjectSearcher userQuery = new ManagementObjectSearcher( scope, relatedQuery ); ManagementObjectCollection users = userQuery.Get(); foreach ( ManagementObject user in users ) { PrintProperties( user.Properties ); } } Console.WriteLine( "\nDone! Press a key to exit..." ); Console.ReadKey( true ); } private static void PrintProperty( PropertyData pd ) { string value = "null"; string valueType = "n/a"; if ( null == pd.Value ) value = "null"; if ( pd.Value != null ) { value = pd.Value.ToString(); valueType = pd.Value.GetType().ToString(); } Console.WriteLine( " \"{0}\" = ({1}) \"{2}\"", pd.Name, valueType, value ); } private static void PrintProperties( PropertyDataCollection properties ) { foreach ( PropertyData pd in properties ) { PrintProperty( pd ); } } } } So... is there way to quickly and reliably obtain the user SID given the information I retrieve from WMI, or should I be looking at using something like SENS instead?
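
    As an aside, if the service can obtain the account name by some other route (for example from the owner of a process in that session), translating the name to a SID needs no WMI query at all; whether that still resolves for cached-credential logons while disconnected from the domain would need testing. A small sketch, with the class and method names invented for illustration:

        using System;
        using System.Security.Principal;

        class SidLookup
        {
            // Translate "DOMAIN\user" (or a local account name) to its SID string,
            // e.g. for use as the HKEY_USERS subkey name.
            static string SidForAccount(string accountName)
            {
                var account = new NTAccount(accountName);
                var sid = (SecurityIdentifier)account.Translate(typeof(SecurityIdentifier));
                return sid.Value;
            }

            static void Main()
            {
                Console.WriteLine(SidForAccount(
                    Environment.UserDomainName + "\\" + Environment.UserName));
            }
        }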

  • Android Alert Dialog: Extract User Choice from Radio Buttons?

    - by kel196
    Apologies for any coding ignorance, I am a beginner and am still learning! I am creating a quiz app which reads questions off a SQL database and plots them as points on a google map. As the user clicks each point, an alert dialog appears with radio buttons to answer some question. My radiobuttons are in CustomItemizedOverlay file and when I click submit, I want to be able to send the user choice back to the database, save it and return to the user if their answer is correct. My question is this, how do I pass the user choice out once the submit button has been clicked? I tried making a global variable to read what was selected to no avail. Any suggestions would be appreciated! If I need to post any other code, please ask and I will do so ASAP! package uk.ac.ucl.cege.cegeg077.uceskkw; import java.util.ArrayList; import android.app.AlertDialog; import android.content.Context; import android.content.DialogInterface; import android.graphics.drawable.Drawable; import android.widget.Toast; import com.google.android.maps.ItemizedOverlay; import com.google.android.maps.OverlayItem; public class CustomItemizedOverlay2 extends ItemizedOverlay<OverlayItem> { private ArrayList<OverlayItem> mapOverlays = new ArrayList<OverlayItem>(); private Context context; public CustomItemizedOverlay2(Drawable defaultMarker) { super(boundCenterBottom(defaultMarker)); } public CustomItemizedOverlay2(Drawable defaultMarker, Context context) { this(defaultMarker); this.context = context; } @Override protected OverlayItem createItem(int i) { return mapOverlays.get(i); } @Override public int size() { return mapOverlays.size(); } @Override protected boolean onTap(int index) { OverlayItem item = mapOverlays.get(index); // gets the snippet from ParseKMLAndDisplayOnMap and splits it back into // a string. final CharSequence allanswers[] = item.getSnippet().split("::"); AlertDialog.Builder dialog = new AlertDialog.Builder(context); dialog.setIcon(R.drawable.easterbunnyegg); dialog.setTitle(item.getTitle()); dialog.setSingleChoiceItems(allanswers, -1, new DialogInterface.OnClickListener() { public void onClick(DialogInterface dialog, int whichButton) { Toast.makeText(context, "You have selected " + allanswers[whichButton], Toast.LENGTH_SHORT).show(); } }); dialog.setPositiveButton(R.string.button_submit, new DialogInterface.OnClickListener() { public void onClick(DialogInterface dialog, int whichButton) { dialog.dismiss(); // int selectedPosition = ((AlertDialog) // dialog).getListView().getCheckedItemPosition(); Toast.makeText(context, "hi!", Toast.LENGTH_SHORT) .show(); } }); dialog.setNegativeButton(R.string.button_close, new DialogInterface.OnClickListener() { public void onClick(DialogInterface dialog, int whichButton) { dialog.dismiss(); // on cancel button action } }); AlertDialog question = dialog.create(); question.show(); return true; } public void addOverlay(OverlayItem overlay) { mapOverlays.add(overlay); this.populate(); } }

  • NINTENDO, EDCON and ALLEGIS GROUP @ Oracle Open World 2012 Conference Session (CON9418): The Business Case for Oracle Exalogic: A Customer Perspective

    - by Sanjeev Sharma
    Are you looking to deliver breakthrough performance for packaged and custom applications? For many front-office applications such as Oracle WebCenter Sites, Oracle Transportation Management, and Oracle’s ATG and Siebel product families, improved performance leads directly to greater revenue or cost savings from the business - a compelling proposition. For back-office applications, improved performance has tangible benefits in terms of footprint reductions. For all applications, Oracle Exalogic and Oracle Exadata provide an engineered solution that delivers shorter time to value and lower operational costs. Edcon is a leading clothing, footwear and textiles (CFT) retailing group in southern Africa trading through a range of retail formats. The company has grown from opening its first store in 1929 to ten retail brands trading in over 1000 stores in South Africa, Botswana, Namibia, Swaziland and Lesotho. Edcon's retail business has, through recent acquisitions, added top stationery and houseware brands as well as general merchandise to its CFT portfolio. Edcon was looking to consolidate its existing middleware components (WebLogic and Oracle SOA) and retail applications (Retek, Siebel and E-Business Suite) on a common platform and turned to Oracle Exalogic. With Oracle Exalogic, Edcon is able to derive significant HW CAPEX savings, improve response time of core business applications and mitigate operating risk. Hear senior business leaders from Nintendo, Edcon and Allegis Group discuss the business value of leveraging Oracle Exalogic at the following conference session at Oracle Open World 2012:

        Session: CON9418 - The Business Case for Oracle Exalogic: A Customer Perspective
        Date: Monday, 1 Oct, 2012
        Time: 1:45 pm - 2:45 pm (PST)
        Venue: Moscone South (306)

  • Is there such a thing as a "refactoring/maintainability group" role in software companies?

    - by dukeofgaming
    So, I work in a company that does embedded software development. Other groups focus on the core development of different products' software, while my department, which is in another geographical location at the factory, has to deal with software development as well, but across all products, so that we can also fix things more quickly when the lines go down due to software problems with a product. In other words, we are generalists while the other groups each specialize in their own product. The thing is, it is kind of hard to get involved in core development when you are distributed geographically (well, I know it really isn't that hard, but there might be unintended cultural/political barriers when it comes to the discipline of collaborating remotely). So, since we are currently just putting out fires and are somewhat idle/under-utilized (even though we are a new department, or maybe that is the reason), I thought that a good role for us could be detecting areas of opportunity for refactoring and re-architecting code, and everything else that has to do with stewarding maintainability and modularity. Other groups aren't focused on this because they don't have the time and they have aggressive deadlines, which damages the quality of the code (the eternal story of software projects). The thing is that I want my group/department to be recognized by management and the other groups in this role officially, and I'm having trouble coming up with a good definition/identity of our group for this purpose. So my question is: is this role something that already exists, or am I the first one to make something like this up?

  • Why does MOSS sometimes delete an existing user from a site?

    - by Jesse
    I'm experiencing an issue with a MOSS installation. I am using the Site Settings Permissions to add an Active Directory account as a valid user of a site. This entails validating that the user account name is correct via the 'Check Names' button, then giving them 'Contribute' permissions. Once this is done they appear as a user on the 'All People' page. This works fine and the user is able to access the site. At some point in the future (sometimes several days later) the user account is somehow removed as a valid user from the site. This site resides in a test environment so access is pretty well controlled; which has allowed us to rule out someone else going in and removing the user manually. This appears to be something that is being done by the system itself and we have no idea why. We can manually add the user back, but then it will eventually get removed again later. I have an admittedly limited understanding of SharePoint permissions, but I believe that SharePoint stores valid users in a SQL database and I would assume that when dealing with Active Directory accounts it would be storing the user name and probably the SID. It appears that for some reason this record is later getting deleted out of the database, as the users will suddenly disappear from the "All People" page and will start getting "Access Denied: You are not authorized..." messages when trying to access the site. Has anyone seen this behavior before?

  • Auth user and exec a Node app only with Apache?

    - by Blame
    I couldn't find an answer on the web and I've been trying for days now, so I hope that someone with more experience with Apache can help me out. I am writing a web editor, and the user should be able to edit a file that is on the server in a directory the user has access to. The problem I am facing is that I need to authenticate against the system users (shadow/passwd). So the user should be able to log in with a system account, and then the Node app which does all the logic should be started with that user's rights. I hope to get this working without any additional script and only with Apache. I found out two things: I can use mod_auth_pam to authenticate the user, and there is a mod called suEXEC which can exec the Node app as a specified user. The problem is that I have to hard-code which user is used by suEXEC, but I want to decide that when the user logs in. Is there any way to authenticate a user against shadow/passwd and then exec a program with that user's rights? I don't want to run the Node app as root, and the user should only be able to access his own files. Any help would be appreciated! Thanks, Kodak

  • PHP Multiple User Login Form - Navigation to Different Pages Based on Login Credentials

    - by Zulu Irminger
    I am trying to create a login page that will send the user to a different index.php page based on their login credentials. For example, should a user with the "IT Technician" role log in, they will be sent to "index.php", and if a user with the "Student" role log in, they will be sent to the "student/index.php" page. I can't see what's wrong with my code, but it's not working... I'm getting the "wrong login credentials" message every time I press the login button. My code for the user login page is here: <?php session_start(); if (isset($_SESSION["manager"])) { header("location: http://www.zuluirminger.com/SchoolAdmin/index.php"); exit(); } ?> <?php if (isset($_POST["username"]) && isset($_POST["password"]) && isset($_POST["role"])) { $manager = preg_replace('#[^A-Za-z0-9]#i', '', $_POST["username"]); $password = preg_replace('#[^A-Za-z0-9]#i', '', $_POST["password"]); $role = preg_replace('#[^A-Za-z0-9]#i', '', $_POST["role"]); include "adminscripts/connect_to_mysql.php"; $sql = mysql_query("SELECT id FROM Users WHERE username='$manager' AND password='$password' AND role='$role' LIMIT 1"); $existCount = mysql_num_rows($sql); if (($existCount == 1) && ($role == 'IT Technician')) { while ($row = mysql_fetch_array($sql)) { $id = $row["id"]; } $_SESSION["id"] = $id; $_SESSION["manager"] = $manager; $_SESSION["password"] = $password; $_SESSION["role"] = $role; header("location: http://www.zuluirminger.com/SchoolAdmin/index.php"); } else { echo 'Your login details were incorrect. Please try again <a href="http://www.zuluirminger.com/SchoolAdmin/index.php">here</a>'; exit(); } } ?> <?php if (isset($_POST["username"]) && isset($_POST["password"]) && isset($_POST["role"])) { $manager = preg_replace('#[^A-Za-z0-9]#i', '', $_POST["username"]); $password = preg_replace('#[^A-Za-z0-9]#i', '', $_POST["password"]); $role = preg_replace('#[^A-Za-z0-9]#i', '', $_POST["role"]); include "adminscripts/connect_to_mysql.php"; $sql = mysql_query("SELECT id FROM Users WHERE username='$manager' AND password='$password' AND role='$role' LIMIT 1"); $existCount = mysql_num_rows($sql); if (($existCount == 1) && ($role == 'Student')) { while ($row = mysql_fetch_array($sql)) { $id = $row["id"]; } $_SESSION["id"] = $id; $_SESSION["manager"] = $manager; $_SESSION["password"] = $password; $_SESSION["role"] = $role; header("location: http://www.zuluirminger.com/SchoolAdmin/student/index.php"); } else { echo 'Your login details were incorrect. 
Please try again <a href="http://www.zuluirminger.com/SchoolAdmin/index.php">here</a>'; exit(); } } ?> And the form that the data is pulled from is shown here: <form id="LoginForm" name="LoginForm" method="post" action="http://www.zuluirminger.com/SchoolAdmin/user_login.php"> User Name:<br /> <input type="text" name="username" id="username" size="50" /><br /> <br /> Password:<br /> <input type="password" name="password" id="password" size="50" /><br /> <br /> Log in as: <select name="role" id="role"> <option value="">...</option> <option value="Head">Head</option> <option value="Deputy Head">Deputy Head</option> <option value="IT Technician">IT Technician</option> <option value="Pastoral Care">Pastoral Care</option> <option value="Bursar">Bursar</option> <option value="Secretary">Secretary</option> <option value="Housemaster">Housemaster</option> <option value="Teacher">Teacher</option> <option value="Tutor">Tutor</option> <option value="Sanatorium Staff">Sanatorium Staff</option> <option value="Kitchen Staff">Kitchen Staff</option> <option value="Parent">Parent</option> <option value="Student">Student</option> </select><br /> <br /> <input type="submit" name = "button" id="button" value="Log In" onclick="javascript:return validateLoginForm();" /> </h3> </form> Once logged in (and should the correct page be loaded, the validation code I have at the top of the script looks like this: <?php session_start(); if (!isset($_SESSION["manager"])) { header("location: http://www.zuluirminger.com/SchoolAdmin/user_login.php"); exit(); } $managerID = preg_replace('#[^0-9]#i', '', $_SESSION["id"]); $manager = preg_replace('#[^A-Za-z0-9]#i', '', $_SESSION["manager"]); $password = preg_replace('#[^A-Za-z0-9]#i', '', $_SESSION["password"]); $role = preg_replace('#[^A-Za-z0-9]#i', '', $_SESSION["role"]); include "adminscripts/connect_to_mysql.php"; $sql = mysql_query("SELECT id FROM Users WHERE username='$manager' AND password='$password' AND role='$role' LIMIT 1"); $existCount = mysql_num_rows($sql); if ($existCount == 0) { header("location: http://www.zuluirminger.com/SchoolAdmin/index.php"); exit(); } ?> Just so you're aware, the database table has the following fields: id, username, password and role. Any help would be greatly appreciated! Many thanks, Zulu

  • LLBLGen Pro feature highlights: grouping model elements

    - by FransBouma
    (This post is part of a series of posts about features of the LLBLGen Pro system) When working with an entity model which has more than a few entities, it's often convenient to be able to group entities together if they belong to a semantic sub-model. For example, if your entity model has several entities which are about 'security', it would be practical to group them together under the 'security' moniker. This way, you could easily find them back, yet they can be left inside the complete entity model altogether so their relationships with entities outside the group are kept. In other situations your domain consists of semi-separate entity models which all target tables/views which are located in the same database. It then might be convenient to have a single project to manage the complete target database, yet have the entity models separate of each other and have them result in separate code bases. LLBLGen Pro can do both for you. This blog post will illustrate both situations. The feature is called group usage and is controllable through the project settings. This setting is supported on all supported O/R mapper frameworks. Situation one: grouping entities in a single model. This situation is common for entity models which are dense, so many relationships exist between all sub-models: you can't split them up easily into separate models (nor do you likely want to), however it's convenient to have them grouped together into groups inside the entity model at the project level. A typical example for this is the AdventureWorks example database for SQL Server. This database, which is a single catalog, has for each sub-group a schema, however most of these schemas are tightly connected with each other: adding all schemas together will give a model with entities which indirectly are related to all other entities. LLBLGen Pro's default setting for group usage is AsVisualGroupingMechanism which is what this situation is all about: we group the elements for visual purposes, it has no real meaning for the model nor the code generated. Let's reverse engineer AdventureWorks to an entity model. By default, LLBLGen Pro uses the target schema an element is in which is being reverse engineered, as the group it will be in. This is convenient if you already have categorized tables/views in schemas, like which is the case in AdventureWorks. Of course this can be switched off, or corrected on the fly. When reverse engineering, we'll walk through a wizard which will guide us with the selection of the elements which relational model data should be retrieved, which we can later on use to reverse engineer to an entity model. The first step after specifying which database server connect to is to select these elements. below we can see the AdventureWorks catalog as well as the different schemas it contains. We'll include all of them. After the wizard completes, we have all relational model data nicely in our catalog data, with schemas. So let's reverse engineer entities from the tables in these schemas. We select in the catalog explorer the schemas 'HumanResources', 'Person', 'Production', 'Purchasing' and 'Sales', then right-click one of them and from the context menu, we select Reverse engineer Tables to Entity Definitions.... This will bring up the dialog below. We check all checkboxes in one go by checking the checkbox at the top to mark them all to be added to the project. 
As you can see LLBLGen Pro has already filled in the group name based on the schema name, as this is the default and we didn't change the setting. If you want, you can select multiple rows at once and set the group name to something else using the controls on the dialog. We're fine with the group names chosen so we'll simply click Add to Project. This gives the following result:   (I collapsed the other groups to keep the picture small ;)). As you can see, the entities are now grouped. Just to see how dense this model is, I've expanded the relationships of Employee: As you can see, it has relationships with entities from three other groups than HumanResources. It's not doable to cut up this project into sub-models without duplicating the Employee entity in all those groups, so this model is better suited to be used as a single model resulting in a single code base, however it benefits greatly from having its entities grouped into separate groups at the project level, to make work done on the model easier. Now let's look at another situation, namely where we work with a single database while we want to have multiple models and for each model a separate code base. Situation two: grouping entities in separate models within the same project. To get rid of the entities to see the second situation in action, simply undo the reverse engineering action in the project. We still have the AdventureWorks relational model data in the catalog. To switch LLBLGen Pro to see each group in the project as a separate project, open the Project Settings, navigate to General and set Group usage to AsSeparateProjects. In the catalog explorer, select Person and Production, right-click them and select again Reverse engineer Tables to Entities.... Again check the checkbox at the top to mark all entities to be added and click Add to Project. We get two groups, as expected, however this time the groups are seen as separate projects. This means that the validation logic inside LLBLGen Pro will see it as an error if there's e.g. a relationship or an inheritance edge linking two groups together, as that would lead to a cyclic reference in the code bases. To see this variant of the grouping feature, seeing the groups as separate projects, in action, we'll generate code from the project with the two groups we just created: select from the main menu: Project -> Generate Source-code... (or press F7 ;)). In the dialog popping up, select the target .NET framework you want to use, the template preset, fill in a destination folder and click Start Generator (normal). This will start the code generator process. As expected the code generator has simply generated two code bases, one for Person and one for Production: The group name is used inside the namespace for the different elements. This allows you to add both code bases to a single solution and use them together in a different project without problems. Below is a snippet from the code file of a generated entity class. //... using System.Xml.Serialization; using AdventureWorks.Person; using AdventureWorks.Person.HelperClasses; using AdventureWorks.Person.FactoryClasses; using AdventureWorks.Person.RelationClasses; using SD.LLBLGen.Pro.ORMSupportClasses; namespace AdventureWorks.Person.EntityClasses { //... /// <summary>Entity class which represents the entity 'Address'.<br/><br/></summary> [Serializable] public partial class AddressEntity : CommonEntityBase //... 
The advantage of this is that you can have two code bases and work with them separately, yet have a single target database and maintain everything in a single location. If you decide to move to a single code base, you can do so with a change of one setting. It's also useful if you want to keep the groups as separate models (and code bases) yet want to add relationships to elements from another group using a copy of the entity: you can simply reverse engineer the target table to a new entity into a different group, effectively making a copy of the entity. As there's a single target database, changes made to that database are reflected in both models which makes maintenance easier than when you'd have a separate project for each group, with its own relational model data. Conclusion LLBLGen Pro offers a flexible way to work with entities in sub-models and control how the sub-models end up in the generated code.

    Read the article

  • Fun with Aggregates

    - by Paul White
    There are interesting things to be learned from even the simplest queries.  For example, imagine you are given the task of writing a query to list AdventureWorks product names where the product has at least one entry in the transaction history table, but fewer than ten. One possible query to meet that specification is: SELECT p.Name FROM Production.Product AS p JOIN Production.TransactionHistory AS th ON p.ProductID = th.ProductID GROUP BY p.ProductID, p.Name HAVING COUNT_BIG(*) < 10; That query correctly returns 23 rows (execution plan and data sample shown below): The execution plan looks a bit different from the written form of the query: the base tables are accessed in reverse order, and the aggregation is performed before the join.  The general idea is to read all rows from the history table, compute the count of rows grouped by ProductID, merge join the results to the Product table on ProductID, and finally filter to only return rows where the count is less than ten. This ‘fully-optimized’ plan has an estimated cost of around 0.33 units.  The reason for the quote marks there is that this plan is not quite as optimal as it could be – surely it would make sense to push the Filter down past the join too?  To answer that, let’s look at some other ways to formulate this query.  This being SQL, there are any number of ways to write logically-equivalent query specifications, so we’ll just look at a couple of interesting ones.  The first query is an attempt to reverse-engineer T-SQL from the optimized query plan shown above.  It joins the result of pre-aggregating the history table to the Product table before filtering: SELECT p.Name FROM ( SELECT th.ProductID, cnt = COUNT_BIG(*) FROM Production.TransactionHistory AS th GROUP BY th.ProductID ) AS q1 JOIN Production.Product AS p ON p.ProductID = q1.ProductID WHERE q1.cnt < 10; Perhaps a little surprisingly, we get a slightly different execution plan: The results are the same (23 rows) but this time the Filter is pushed below the join!  The optimizer chooses nested loops for the join, because the cardinality estimate for rows passing the Filter is a bit low (estimate 1 versus 23 actual), though you can force a merge join with a hint and the Filter still appears below the join.  In yet another variation, the < 10 predicate can be ‘manually pushed’ by specifying it in a HAVING clause in the “q1” sub-query instead of in the WHERE clause as written above. The reason this predicate can be pushed past the join in this query form, but not in the original formulation is simply an optimizer limitation – it does make efforts (primarily during the simplification phase) to encourage logically-equivalent query specifications to produce the same execution plan, but the implementation is not completely comprehensive. Moving on to a second example, the following query specification results from phrasing the requirement as “list the products where there exists fewer than ten correlated rows in the history table”: SELECT p.Name FROM Production.Product AS p WHERE EXISTS ( SELECT * FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID HAVING COUNT_BIG(*) < 10 ); Unfortunately, this query produces an incorrect result (86 rows): The problem is that it lists products with no history rows, though the reasons are interesting.  The COUNT_BIG(*) in the EXISTS clause is a scalar aggregate (meaning there is no GROUP BY clause) and scalar aggregates always produce a value, even when the input is an empty set.  
In the case of the COUNT aggregate, the result of aggregating the empty set is zero (the other standard aggregates produce a NULL).  To make the point really clear, let’s look at product 709, which happens to be one for which no history rows exist: -- Scalar aggregate SELECT COUNT_BIG(*) FROM Production.TransactionHistory AS th WHERE th.ProductID = 709;   -- Vector aggregate SELECT COUNT_BIG(*) FROM Production.TransactionHistory AS th WHERE th.ProductID = 709 GROUP BY th.ProductID; The estimated execution plans for these two statements are almost identical: You might expect the Stream Aggregate to have a Group By for the second statement, but this is not the case.  The query includes an equality comparison to a constant value (709), so all qualified rows are guaranteed to have the same value for ProductID and the Group By is optimized away. In fact there are some minor differences between the two plans (the first is auto-parameterized and qualifies for trivial plan, whereas the second is not auto-parameterized and requires cost-based optimization), but there is nothing to indicate that one is a scalar aggregate and the other is a vector aggregate.  This is something I would like to see exposed in show plan so I suggested it on Connect.  Anyway, the results of running the two queries show the difference at runtime: The scalar aggregate (no GROUP BY) returns a result of zero, whereas the vector aggregate (with a GROUP BY clause) returns nothing at all.  Returning to our EXISTS query, we could ‘fix’ it by changing the HAVING clause to reject rows where the scalar aggregate returns zero: SELECT p.Name FROM Production.Product AS p WHERE EXISTS ( SELECT * FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID HAVING COUNT_BIG(*) BETWEEN 1 AND 9 ); The query now returns the correct 23 rows: Unfortunately, the execution plan is less efficient now – it has an estimated cost of 0.78 compared to 0.33 for the earlier plans.  Let’s try adding a redundant GROUP BY instead of changing the HAVING clause: SELECT p.Name FROM Production.Product AS p WHERE EXISTS ( SELECT * FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY th.ProductID HAVING COUNT_BIG(*) < 10 ); Not only do we now get correct results (23 rows), this is the execution plan: I like to compare that plan to quantum physics: if you don’t find it shocking, you haven’t understood it properly :)  The simple addition of a redundant GROUP BY has resulted in the EXISTS form of the query being transformed into exactly the same optimal plan we found earlier.  What’s more, in SQL Server 2008 and later, we can replace the odd-looking GROUP BY with an explicit GROUP BY on the empty set: SELECT p.Name FROM Production.Product AS p WHERE EXISTS ( SELECT * FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY () HAVING COUNT_BIG(*) < 10 ); I offer that as an alternative because some people find it more intuitive (and it perhaps has more geek value too).  Whichever way you prefer, it’s rather satisfying to note that the result of the sub-query does not exist for a particular correlated value where a vector aggregate is used (the scalar COUNT aggregate always returns a value, even if zero, so it always ‘EXISTS’ regardless which ProductID is logically being evaluated). 
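A quick way to see the scalar-versus-vector distinction outside AdventureWorks is to run the aggregates against an empty table. The script below is not from the original article; it is a minimal standalone repro using a temporary table (#Empty is just an illustrative name), and it also shows the NULL that the other standard aggregates return over an empty set.

-- Aggregates over an empty input set
CREATE TABLE #Empty (id int NOT NULL);

-- Scalar aggregates (no GROUP BY) always return exactly one row
SELECT COUNT_BIG(*) AS row_count FROM #Empty;  -- returns 0
SELECT SUM(1)       AS sum_value FROM #Empty;  -- returns NULL

-- A vector aggregate (with GROUP BY) returns one row per group;
-- the empty set contains no groups, so no row is returned at all
SELECT COUNT_BIG(*) AS row_count
FROM #Empty
GROUP BY id;

DROP TABLE #Empty;

That unconditional zero from the scalar COUNT_BIG(*) is exactly what tripped up the first EXISTS attempt above.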
The following query forms also produce the optimal plan and correct results, so long as a vector aggregate is used (you can probably find more equivalent query forms): WHERE Clause SELECT p.Name FROM Production.Product AS p WHERE ( SELECT COUNT_BIG(*) FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY () ) < 10; APPLY SELECT p.Name FROM Production.Product AS p CROSS APPLY ( SELECT NULL FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY () HAVING COUNT_BIG(*) < 10 ) AS ca (dummy); FROM Clause SELECT q1.Name FROM ( SELECT p.Name, cnt = ( SELECT COUNT_BIG(*) FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY () ) FROM Production.Product AS p ) AS q1 WHERE q1.cnt < 10; This last example uses SUM(1) instead of COUNT and does not require a vector aggregate…you should be able to work out why :) SELECT q.Name FROM ( SELECT p.Name, cnt = ( SELECT SUM(1) FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID ) FROM Production.Product AS p ) AS q WHERE q.cnt < 10; The semantics of SQL aggregates are rather odd in places.  It definitely pays to get to know the rules, and to be careful to check whether your queries are using scalar or vector aggregates.  As we have seen, query plans do not show in which ‘mode’ an aggregate is running and getting it wrong can cause poor performance, wrong results, or both. © 2012 Paul White Twitter: @SQL_Kiwi email: [email protected]

    Read the article

