Search Results

Search found 4167 results on 167 pages for 'rman 11gr2 duplicate'.

  • Terminal non-responsive on load, can't enter anything until CTRL+C

    - by Silver Light
    Hello! I have an issue with the terminal in Ubuntu 10.04. When I launch it, it hangs, and I cannot do anything until I press CTRL+C. I cannot remember when this started. What could be wrong? It looks like the terminal is loading or processing something each time it starts. How can I diagnose and solve this problem? EDIT: Here are the contents of ~/.bashrc:
    # ~/.bashrc: executed by bash(1) for non-login shells. # see /usr/share/doc/bash/examples/startup-files (in the package bash-doc) # for examples # If not running interactively, don't do anything [ -z "$PS1" ] && return # don't put duplicate lines in the history. See bash(1) for more options # ... or force ignoredups and ignorespace HISTCONTROL=ignoredups:ignorespace # append to the history file, don't overwrite it shopt -s histappend # for setting history length see HISTSIZE and HISTFILESIZE in bash(1) HISTSIZE=1000 HISTFILESIZE=2000 # check the window size after each command and, if necessary, # update the values of LINES and COLUMNS. shopt -s checkwinsize # make less more friendly for non-text input files, see lesspipe(1) [ -x /usr/bin/lesspipe ] && eval "$(SHELL=/bin/sh lesspipe)" # set variable identifying the chroot you work in (used in the prompt below) if [ -z "$debian_chroot" ] && [ -r /etc/debian_chroot ]; then debian_chroot=$(cat /etc/debian_chroot) fi # set a fancy prompt (non-color, unless we know we "want" color) case "$TERM" in xterm-color) color_prompt=yes;; esac # uncomment for a colored prompt, if the terminal has the capability; turned # off by default to not distract the user: the focus in a terminal window # should be on the output of commands, not on the prompt #force_color_prompt=yes if [ -n "$force_color_prompt" ]; then if [ -x /usr/bin/tput ] && tput setaf 1 >&/dev/null; then # We have color support; assume it's compliant with Ecma-48 # (ISO/IEC-6429). (Lack of such support is extremely rare, and such # a case would tend to support setf rather than setaf.) color_prompt=yes else color_prompt= fi fi if [ "$color_prompt" = yes ]; then PS1='${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$ ' else PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\$ ' fi unset color_prompt force_color_prompt # If this is an xterm set the title to user@host:dir case "$TERM" in xterm*|rxvt*) PS1="\[\e]0;${debian_chroot:+($debian_chroot)}\u@\h: \w\a\]$PS1" ;; *) ;; esac # enable color support of ls and also add handy aliases if [ -x /usr/bin/dircolors ]; then test -r ~/.dircolors && eval "$(dircolors -b ~/.dircolors)" || eval "$(dircolors -b)" alias ls='ls --color=auto' #alias dir='dir --color=auto' #alias vdir='vdir --color=auto' alias grep='grep --color=auto' alias fgrep='fgrep --color=auto' alias egrep='egrep --color=auto' fi # some more ls aliases alias ll='ls -alF' alias la='ls -A' alias l='ls -CF' # Add an "alert" alias for long running commands. Use like so: # sleep 10; alert alias alert='notify-send --urgency=low -i "$([ $? = 0 ] && echo terminal || echo error)" "$(history|tail -n1|sed -e '\''s/^\s*[0-9]\+\s*//;s/[;&|]\s*alert$//'\'')"' # Alias definitions. # You may want to put all your additions into a separate file like # ~/.bash_aliases, instead of adding them here directly. # See /usr/share/doc/bash-doc/examples in the bash-doc package. if [ -f ~/.bash_aliases ]; then . ~/.bash_aliases fi # enable programmable completion features (you don't need to enable # this, if it's already enabled in /etc/bash.bashrc and /etc/profile # sources /etc/bash.bashrc). 
if [ -f /etc/bash_completion ] && ! shopt -oq posix; then . /etc/bash_completion fi # Source .profile if [ -f ~/.profile ]; then . ~/.profile fi Setting -x at the beginning showed me that it tries to repeat this without stopping: +++++++++++++++++++ '[' 'complete -f -X '\''!*.@(pdf|PDF)'\'' acroread gpdf xpdf' '!=' 'complete -f -X '\''!*.@(pdf|PDF)'\'' acroread gpdf xpdf' ']' +++++++++++++++++++ line='complete -f -X '\''!*.@(pdf|PDF)'\'' acroread gpdf xpdf' +++++++++++++++++++ line='complete -f -X '\''!*.@(pdf|PDF)'\'' acroread gpdf xpdf' +++++++++++++++++++ line=' acroread gpdf xpdf' +++++++++++++++++++ list=("${list[@]}" $line) +++++++++++++++++++ read line

    Read the article

  • Bumblebee [ERROR]Cannot access secondary GPU - error: [XORG]

    - by Lunchbox
    Though this may seem like a duplicate question, none of the suggestions I've seen have worked for me, however nearly all posters get good results. I'll start with hardware: Metabox W350ST notebook Intel Core i7 4700 16GB RAM GTX 765M (with Optimus) 128GB SSD 1TB SSHD My initial error output when trying to optirun a game is: [ERROR]Cannot access secondary GPU - error: [XORG] (EE) NVIDIA(0): Failed to initialize the NVIDIA GPU at PCI:1:0:0. Please [133.973920] [ERROR]Aborting because fallback start is disabled. If anything else is needed to troubleshoot this just let me know. Adding bumblebee.conf: # Configuration file for Bumblebee. Values should **not** be put between quotes ## Server options. Any change made in this section will need a server restart # to take effect. [bumblebeed] # The secondary Xorg server DISPLAY number VirtualDisplay=:8 # Should the unused Xorg server be kept running? Set this to true if waiting # for X to be ready is too long and don't need power management at all. KeepUnusedXServer=false # The name of the Bumbleblee server group name (GID name) ServerGroup=bumblebee # Card power state at exit. Set to false if the card shoud be ON when Bumblebee # server exits. TurnCardOffAtExit=false # The default behavior of '-f' option on optirun. If set to "true", '-f' will # be ignored. NoEcoModeOverride=false # The Driver used by Bumblebee server. If this value is not set (or empty), # auto-detection is performed. The available drivers are nvidia and nouveau # (See also the driver-specific sections below) Driver=nvidia # Directory with a dummy config file to pass as a -configdir to secondary X XorgConfDir=/etc/bumblebee/xorg.conf.d ## Client options. Will take effect on the next optirun executed. [optirun] # Acceleration/ rendering bridge, possible values are auto, virtualgl and # primus. Bridge=auto # The method used for VirtualGL to transport frames between X servers. # Possible values are proxy, jpeg, rgb, xv and yuv. VGLTransport=proxy # List of paths which are searched for the primus libGL.so.1 when using # the primus bridge PrimusLibraryPath=/usr/lib/x86_64-linux-gnu/primus:/usr/lib/i386-linux-gnu/primus # Should the program run under optirun even if Bumblebee server or nvidia card # is not available? AllowFallbackToIGC=false # Driver-specific settings are grouped under [driver-NAME]. The sections are # parsed if the Driver setting in [bumblebeed] is set to NAME (or if auto- # detection resolves to NAME). 
# PMMethod: method to use for saving power by disabling the nvidia card, valid # values are: auto - automatically detect which PM method to use # bbswitch - new in BB 3, recommended if available # switcheroo - vga_switcheroo method, use at your own risk # none - disable PM completely # https://github.com/Bumblebee-Project/Bumblebee/wiki/Comparison-of-PM-methods ## Section with nvidia driver specific options, only parsed if Driver=nvidia [driver-nvidia] # Module name to load, defaults to Driver if empty or unset KernelDriver=nvidia PMMethod=auto # colon-separated path to the nvidia libraries LibraryPath=/usr/lib/nvidia-current:/usr/lib32/nvidia-current # comma-separated path of the directory containing nvidia_drv.so and the # default Xorg modules path XorgModulePath=/usr/lib/nvidia-current/xorg,/usr/lib/xorg/modules XorgConfFile=/etc/bumblebee/xorg.conf.nvidia ## Section with nouveau driver specific options, only parsed if Driver=nouveau [driver-nouveau] KernelDriver=nouveau PMMethod=auto XorgConfFile=/etc/bumblebee/xorg.conf.nouveau DRIVER VERSION - Output of jockey-text -l: nvidia_304_updates - nvidia_304_updates (Proprietary, Enabled, Not in use)

    Read the article

  • Converting .docx to pdf (or .doc to pdf, or .doc to odt, etc.) with libreoffice on a webserver on the fly using php

    - by robertphyatt
    Ok, so I needed to convert .docx files to .pdf files on the fly, but none of the free php libraries that were available let me do it on my server (a webservice was not good enough). Basically either I needed to pay for a library (and have it maybe suck) or just deal with the free ones that didn't convert the formatting well enough. Not good enough! I found that LibreOffice (OpenOffice's successor) allows command line conversion using the LibreOffice conversion engine (which DID preserve the formatting like I wanted and generally worked great). I loaded the latest version of Ubuntu (http://www.ubuntu.com/download/ubuntu/download) onto my Virtual Box (https://www.virtualbox.org/wiki/Downloads) on my computer and found that I was able to easily convert files using the commandline like this: libreoffice --headless -convert-to pdf fileToConvert.docx -outdir output/path/for/pdf I thought: sweet...but I don't have admin rights on my host's web server. I tried to use a "portable" version of LibreOffice that I obtained from http://portablelinuxapps.org/ but I was unable to get it to work on my host's webserver, because my host's webserver didn't have all the dependencies (Dependency Hell! http://en.wikipedia.org/wiki/Dependency_hell) I was at a loss of how to make it work, until I ran across a cool project made by a Ph.D. student (Philip J. Guo) at Stanford called CDE: http://www.stanford.edu/~pgbovine/cde.html I will let you look at his explanations of how it works (I followed what he did in http://www.youtube.com/watch?feature=player_embedded&v=6XdwHo1BWwY, starting at about 32:00 as well as the directions on his site), but in short, it allows one to avoid dependency hell by copying all the files used when you run certain commands, recreating the linux environment where the command worked. I was able to use this to run LibreOffice without having to resort to someone's portable version of it, and it worked just like it did when I did it on Ubuntu with the command above, with a tweak: I needed to run the wrapper of LibreOffice the CDE generated. So, below is my PHP code that calls it. In this code snippet, the filename to be copied is passed in as $_POST["filename"]. I copy the file to the same spot where I originally converted the file, convert it, copy it back and then delete all the files (so that it doesn't start growing exponentially). I did it this way because I wasn't able to make it work otherwise on the webserver. If there is a linux + webserver ninja out there that can figure out how to make it work without doing this, I would be interested to know what you did. Please post a comment or something if you did that. <?php //first copy the file to the magic place where we can convert it to a pdf on the fly copy($time.$_POST["filename"], "../LibreOffice/cde-package/cde-root/home/robert/Desktop/".$_POST["filename"]); //change to that directory chdir('../LibreOffice/cde-package/cde-root/home/robert'); //the magic command that does the conversion $myCommand = "./libreoffice.cde --headless -convert-to pdf Desktop/".$_POST["filename"]." -outdir Desktop/"; exec ($myCommand); //copy the file back copy("Desktop/".str_replace(".docx", ".pdf", $_POST["filename"]), "../../../../../documents/".str_replace(".docx", ".pdf", $_POST["filename"])); //delete all the files out of the magic place where we can convert it to a pdf on the fly $files1 = scandir('Desktop'); //my files that I generated all happened to start with a number. 
    $pattern = '/^[0-9]/'; foreach ($files1 as $value) { preg_match($pattern, $value, $matches); if(count($matches) > 0) { unlink("Desktop/".$value); } } //changing the header to the location of the file makes it work well on androids header( 'Location: '.str_replace(".docx", ".pdf", $_POST["filename"]) ); ?> And here is the tar.gz file I generated with CDE. To duplicate what I did exactly, put the tar.gz file in a folder somewhere. I will call that folder the "root". Make a new folder called "documents" in the "root" folder. Unpack the tar.gz and run the php script above from the "documents" folder. Success! I made a truly portable version of LibreOffice that can convert files on the fly on a webserver using 100% free, open source software!

    Read the article

  • My architecture has a problem with views that require information from different objects. How can I solve this?

    - by Oscar
    I am building an architecture like this. These are my SW layers, from top to bottom: Views, Business Logic, Repository. My views are going to generate my HTML to be sent to the user. The business logic layer is where all the business logic lives. The repository is a layer for accessing the DB. My idea is that the repository uses entities (basically representations of the tables) in order to perform DB queries. The layers communicate between themselves using Business Objects (BOs), which are objects that represent the real-world objects themselves. They can contain business rules and methods. The views build/use DTOs; these are basically objects that have the information required to be shown on the screen. They also expect this kind of object on actions and, before calling the business logic, they create BOs. First question: what is your overall feeling about this architecture? I've used a similar architecture for some projects and I always got this problem: if my view has this list to show: Student1, age, course, Date Enrolled, Already paid? - it has information from different BOs. How do you think one should build the structure? These were the alternatives I could think of: The view layer could call the methods to get the student, then the course it studies, then the payment information. This would cause a lot of DB accesses and my view would have to know how to assemble this information. This just seems wrong to me. I could have an "adapter object" that has the required information (a class that would have properties Student, Course and Payment). But I would require one adapter object for each similar case, and this may get very bad for big projects. I still don't like them. Do you have ideas? How would you change the architecture to avoid this kind of problem?
    @Rory: I read about CQRS and I don't think it suits my needs. As taken from a link referenced in your link: "Before describing the details of CQRS we need to understand the two main driving forces behind it: collaboration and staleness." That means: many different actors using the same object (collaboration) and, once data has been shown to a user, that same data may have been changed by another actor – it is stale (staleness). My problem is that I want to show the user information from different BOs, so I would need to receive them from the service layer. How can my service layer assemble and deliver this information?
    Edit to @AndrewM: Yes, you understood it correctly, the original idea was to have the view layer build the BOs, but you have a very nice point about creating the BOs inside the business layer. Assuming I follow your advice and move the creation logic inside the business layer, my business layer interface would contain the DTOs, for instance public void foo(MyDTO object). But as far as I understand, the DTO is tightly coupled to each view, so it would not be reusable by a second view. In order to use it, the second view would need to build a specific DTO from a specific view or I would have to duplicate the code in the business layer. Is this correct or am I missing something?
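    To make alternative 2 concrete, here is a rough sketch of what I mean by an "adapter object" (a screen-specific DTO) assembled inside the business layer, so the view makes a single call. Every name below is made up for illustration, and the repository interfaces are just placeholders:
      using System;
      using System.Collections.Generic;

      // Hypothetical screen-specific DTO for the student list described above.
      public class StudentEnrollmentRow
      {
          public string StudentName { get; set; }
          public int Age { get; set; }
          public string CourseName { get; set; }
          public DateTime DateEnrolled { get; set; }
          public bool AlreadyPaid { get; set; }
      }

      // Business-layer service that composes several BOs into the DTO,
      // so the view never orchestrates repository calls itself.
      // IStudentRepository, IEnrollmentRepository and IPaymentRepository are placeholder abstractions.
      public class StudentOverviewService
      {
          private readonly IStudentRepository _students;
          private readonly IEnrollmentRepository _enrollments;
          private readonly IPaymentRepository _payments;

          public StudentOverviewService(IStudentRepository students, IEnrollmentRepository enrollments, IPaymentRepository payments)
          {
              _students = students;
              _enrollments = enrollments;
              _payments = payments;
          }

          public IList<StudentEnrollmentRow> GetEnrollmentOverview()
          {
              var rows = new List<StudentEnrollmentRow>();
              foreach (var student in _students.GetAll())
              {
                  var enrollment = _enrollments.GetCurrentFor(student.Id);
                  rows.Add(new StudentEnrollmentRow
                  {
                      StudentName = student.Name,
                      Age = student.Age,
                      CourseName = enrollment.Course.Name,
                      DateEnrolled = enrollment.Date,
                      AlreadyPaid = _payments.IsPaid(student.Id, enrollment.Course.Id)
                  });
              }
              return rows;
          }
      }
    The view would then bind directly to the rows, and a second screen that needs a different shape simply gets its own DTO plus its own assembling method, which keeps the coupling in one place (the service) instead of in every view.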

    Read the article

  • ReSharper 8.0 EAP now available

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2013/06/28/resharper-8.0-eap-now-available.aspx
    JetBrains have just released ReSharper 8.0 Beta on their Early Access Programme at http://www.jetbrains.com/resharper/whatsnew/?utm_source=resharper8b&utm_medium=newsletter&utm_campaign=resharper&utm_content=customers
    ReSharper 8.0 comes with the following new features:
    - Support for Visual Studio 2013 Preview. Yes, ReSharper is known to work well with the fresh preview of Visual Studio 2013, and if you have already started digging into it, ReSharper 8.0 Beta is ready for the challenge.
    - Faster code fixes. Thanks to the new Fix in Scope feature, you can choose to batch-fix some of the code issues that ReSharper detects in the scope of a project or the whole solution. Supported fixes include removing unused directives and redundant casts.
    - Project dependency viewer. ReSharper is now able to visualize a project dependency graph for a bird's eye view of dependencies within your solution, all without compiling anything!
    - Multifile templates. ReSharper's file templates can now be expanded to generate more than one file. For instance, this is handy for generating pairs of a main logic class and a class for extensions, or sets of partial files.
    - Navigation improvements. These include a new action called Go to Everything to let you search for a file, type or method name from the same input box; support for line numbers in navigation actions; a new tool window called Assembly Explorer for browsing through assemblies; and two more contextual navigation actions: Navigate to Generic Substitutions and Navigate to Assembly Explorer.
    - New solution-wide refactorings. The set of fresh refactorings is headlined by the highly requested Move Instance Method to move methods between classes without making them static. In addition, there are Inline Parameter and Pull Parameter. Last but not least, we're also introducing 4 new XAML-specific refactorings!
    - Extraordinary XAML support. A plethora of new and improved functionality for all developers working with XAML code includes dedicated grid inspections and quick-fixes; Extract Style, Extract, Move and Inline Resource refactorings; atomic renaming of dependency properties; and a lot more.
    - More accessible code completion. ReSharper 8 makes more of its IntelliSense magic available in automatic completion lists, including extension methods and an option to import a type. We're also introducing double completion which gives you additional completion items when you press the corresponding shortcut for the second time.
    - A new level of extensibility. With the new NuGet-based Extension Manager, discovering, installing and uninstalling ReSharper extensions becomes extremely easy in Visual Studio 2010 and higher. When we say extensions, we mean not only full-fledged plug-ins but also sets of templates or SSR patterns that can now be shared much more easily.
    - CSS support improvements. Smarter usage search for CSS attributes, new CSS-specific code inspections, configurable support for CSS3 and earlier versions, compatibility checks against popular browsers - there's a rough outline of what's new for CSS in ReSharper 8.
    - A command-line version of ReSharper. ReSharper 8 goes beyond Visual Studio: we now provide a free standalone tool with hundreds of ReSharper inspections and additionally a duplicate code finder that you can integrate with your CI server or version control system.
    - Multiple minor improvements in areas such as decompiling and code formatting, as well as support for the Blue Theme introduced in Visual Studio 2012 Update 2.

    Read the article

  • The Growing Importance of Network Virtualization

    - by user12608550
    The Growing Importance of Network Virtualization We often focus on server virtualization when we discuss cloud computing, but just as often we neglect to consider some of the critical implications of that technology. The ability to create virtual environments (or VEs [1]) means that we can create, destroy, activate and deactivate, and more importantly, MOVE them around within the cloud infrastructure. This elasticity and mobility has profound implications for how network services are defined, managed, and used to provide cloud services. It's not just servers that benefit from virtualization, it's the network as well. Network virtualization is becoming a hot topic, and not just for discussion but for companies like Oracle and others who have recently acquired net virtualization companies [2,3]. But even before this topic became so prominent, Solaris engineers were working on technologies in Solaris 11 to virtualize network services, known as Project Crossbow [4]. And why is network virtualization so important? Because old assumptions about network devices, topology, and management must be re-examined in light of the self-service, elasticity, and resource sharing requirements of cloud computing infrastructures. Static, hierarchical network designs, and inter-system traffic flows, need to be reconsidered and quite likely re-architected to take advantage of new features like virtual NICs and switches, bandwidth control, load balancing, and traffic isolation. For example, traditional multi-tier Web services (Web server, App server, DB server) that share net traffic over Ethernet wires can now be virtualized and hosted on shared-resource systems that communicate within a larger server at system bus speeds, increasing performance and reducing wired network traffic. And virtualized traffic flows can be monitored and adjusted as needed to optimize network performance for dynamically changing cloud workloads. Additionally, as VEs come and go and move around in the cloud, static network configuration methods cannot easily accommodate the routing and addressing flexibility that VE mobility implies; virtualizing the network itself is a requirement. Oracle Solaris 11 [5] includes key network virtualization technologies needed to implement cloud computing infrastructures. It includes features for the creation and management of virtual NICs and switches, and for the allocation and control of the traffic flows among VEs [6]. Additionally it allows for both sharing and dedication of hardware components to network tasks, such as allocating specific CPUs and vNICs to VEs, and even protocol-specific management of traffic. So, have a look at your current network topology and management practices in view of evolving cloud computing technologies. And don't simply duplicate the physical architecture of servers and connections in a virtualized environment…rethink the traffic flows among VEs and how they can be optimized using Oracle Solaris 11 and other Oracle products and services. [1] I use the term "virtual environment" or VE here instead of the more commonly used "virtual machine" or VM, because not all virtualized operating system environments are full OS kernels under the control of a hypervisor…in other words, not all VEs are VMs. In particular, VEs include Oracle Solaris zones, as well as SPARC VMs (previously called LDoms), and x86-based Solaris and Linux VMs running under hypervisors such as OEL, Xen, KVM, or VMware. 
    [2] Oracle follows VMware into network virtualization space with Xsigo purchase; http://www.mercurynews.com/business/ci_21191001/oracle-follows-vmware-into-network-virtualization-space-xsigo
    [3] Oracle Buys Xsigo; http://www.oracle.com/us/corporate/press/1721421
    [4] Oracle Solaris 11 Networking Virtualization Technology; http://www.oracle.com/technetwork/server-storage/solaris11/technologies/networkvirtualization-312278.html
    [5] Oracle Solaris 11; http://www.oracle.com/us/products/servers-storage/solaris/solaris11/overview/index.html
    [6] For example, the Solaris 11 'dladm' command can be used to limit the bandwidth of a virtual NIC, as follows: dladm create-vnic -l net0 -p maxbw=100M vnic0

    Read the article

  • Where should instantiated classes be stored?

    - by Eric C.
    I'm having a bit of a design dilemma here. I'm writing a library that consists of a bunch of template classes that are designed to be used as a base for creating content. For example: public class Template { public string Name {get; set;} public string Description {get; set;} public string Attribute1 {get; set;} public string Attribute2 {get; set;} public Template() { //constructor } public void DoSomething() { //does something } ... } The problem is, not only is the library providing the templates, it will also supply quite a few predefined templates which are instances of these template classes. The question is, where do I put these instances of the templates? The three solutions I've come up with so far are: 1) Provide serialized instances of the templates as files. On the one hand, this solution would keep the instances separated from the library itself, which is nice, but it would also potentially add complexity for the user. Even if we provided methods for loading/deserializing the files, they'd still have to deal with a bunch of files, and some kind of config file so the app knows where to look for those files. Plus, creating the template files would probably require a separate app, so if the user wanted to stick with the files method of storing templates, we'd have to provide some kind of app for creating the template files. Also, this requires external dependencies for testing the templates in the user's code. 2) Add readonly instances to the template class Example: public class Template { public string Name {get; set;} public string Description {get; set;} public string Attribute1 {get; set;} public string Attribute2 {get; set;} public Template PredefinedTemplate { get { Template templateInstance = new Template(); templateInstance.Name = "Some Name"; templateInstance.Description = "A description"; ... return templateInstance; } } public Template() { //constructor } public void DoSomething() { //does something } ... } This method would be convenient for users, as they would be able to access the predefined templates in code directly, and would be able to unit test code that used them. The drawback here is that the predefined templates pollute the Template type namespace with a bunch of extra stuff. I suppose I could put the predefined templates in a different namespace to get around this drawback. The only other problem with this approach is that I'd have to basically duplicate all the namespaces in the library in the predefined namespace (e.g. Templates.SubTemplates and Predefined.Templates.SubTemplates) which would be a pain, and would also make refactoring more difficult. 3) Make the templates abstract classes and make the predefined templates inherit from those classes. For example: public abstract class Template { public string Name {get; set;} public string Description {get; set;} public string Attribute1 {get; set;} public string Attribute2 {get; set;} public Template() { //constructor } public void DoSomething() { //does something } ... } and public class PredefinedTemplate : Template { public PredefinedTemplate() { this.Name = "Some Name"; this.Description = "A description"; this.Attribute1 = "Some Value"; ... } } This solution is pretty similar to #2, but it ends up creating a lot of classes that don't really do anything (none of our predefined templates are currently overriding behavior), and don't have any methods, so I'm not sure how good a practice this is. Has anyone else had any experience with something like this? 
Is there a best practice of some kind, or a different/better approach that I haven't thought of? I'm kind of banging my head against a wall trying to figure out the best way to go. Thanks!
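    For what it's worth, the separate-namespace variation of option 2 mentioned above could look roughly like this: a static catalog type keeps the predefined instances out of the Template class itself. All names here are illustrative, and it assumes the namespace containing Template is imported:
      namespace MyLibrary.Templates.Predefined
      {
          // Static catalog of predefined template instances, kept out of the Template type.
          public static class PredefinedTemplates
          {
              public static Template Basic
              {
                  get
                  {
                      return new Template
                      {
                          Name = "Some Name",
                          Description = "A description",
                          Attribute1 = "Some Value"
                      };
                  }
              }

              // Each additional predefined template becomes another property here,
              // so the Template namespace itself stays clean.
          }
      }
    Callers reference MyLibrary.Templates.Predefined explicitly when they want the canned instances, code that only needs the base types never sees them, and the instances remain unit-testable without any external files.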

    Read the article

  • Software Center seems to freeze system when installing, syslog has "blocked for more than 120 seconds" errors

    - by nbm
    12.04 (precise) 64-bit Kernel Linux 3.2.0-39 3.6GB memory Intel Core 2 Duo CPU @ 2.40GHz x2 WUBI-installed Ubuntu running on a MacBook Pro 7.1 with OSX running Vista via Boot Camp (hey, I like lots of OS's m'kay?) When installing from Ubuntu software center my system very frequently freezes. This has happened 4 of the last 5 installs. Most recently I was installing the Google Earth .deb from Google's website: clicking the .deb file automatically opens Software Center (otherwise I would have used Synaptic, as I've grown to expect Software Center to freeze my system and I'm rather tired of it.) By "freeze" I mean nothing works: no dash, no launcher, no mouse movement, no alt-tab, can't open terminal (keyboard does not work). Software center does show the "installing" icon but after that it greys out and I can't click anything. REISUB has no effect but a cold power-down and restart is possible. Occasionally, after 5-10 minutes, I'll be able to move the mouse / use the keyboard and run a launcher command or two, although other open apps (Chrome and Software Center) will still be greyed-out/frozen. (I've never waited longer than that - if still unresponsive after 15 minutes I just power down and restart.) Most recently, which is why I am finally posting a question, I waited about 15 minutes and was finally able to open System Monitor while this was going on. Processes tells me that System Monitor is using about 20% of CPU, and nothing else is using much (zeros mostly). In fact I didn't even see Software Center listed? However at this point the system finally partially unfroze, the installation completed, and while I wasn't about to close Software Center I was able to do a system shutdown and fresh restart and I went and took a look at the syslog. In /var/log/syslog I see a lot of ":blocked for more than 120 seconds" messages. Similar to ubuntu hang out with this message :blocked for more than 120 seconds Which has not been answered, and I'm not running a virtual machine. My full syslog with stack traces looks very, very similar to this: Why do tasks on Amazon Xen instance block for over 120 seconds causing server to hang? Note that that question was solved, but that's because the problem was being caused by Amazon and Amazon fixed the bug. I'm not running anything Amazon-related. My syslog does look very similar, however. My question is also similar to this: Troubleshooting server hang But the referenced "duplicate" in that question is about how to kill processes/restart when the system freezes. I know how to kill processes and restart. I want to figure out what is causing the problem so I can try to fix it. I realize that I could just use Synaptic instead of Ubuntu Software Center, but I'd like to try to solve the problem if possible. I'm thinking I should perhaps submit a bug report, but I wanted to first see if anyone else was having any similar problems, and if so what you all did to fix it. I see a number of questions about Software Center freezing and others, including those I linked, about the "blocked for more than 120 seconds" log error, but I didn't see any question that links the two. I did save a copy of the syslog report if anyone wants to see it, but as mentioned it's quite similar to the one posted in the Amazon-related question...and I didn't want to take up even more space unnecessarily as, my apologies - this question has already become extremely verbose!

    Read the article

  • Data Mining Email with Thunderbird

    - by user554629
    Oracle has many formal, searchable locations: Service Requests, BugIDs, Technical Documents. These contain the results of an investigation for a customer crash situation; they're created after the intense work of resolution is over, and typically contain the "root cause" of the failure ... but not the methods for identifying that cause. Email is still the standby for interacting with quickly formed groups of specialists focusing on a particular incident: Customer BI, Network and System specialists; Oracle Tech Support, Development, Consultants; OEM Database, OS technical support. It is a chaotic, time-oriented set of configuration, call stacks, changes, techniques to discover and repair the failure. I needed to organize that information into something cohesive to prepare the blog entry on Teradata. My corporate email client of choice is Thunderbird.
    My original (flawed) search technique: R-click on Inbox in the Thunderbird left pane, and choose Search Messages, Subject: [ teradata ]. Results: a new window titled "Search Messages"; a single pane of selected messages; column headings: Subject, From, Date, Location; no preview window for messages. There are 673 email entries in the result (too many). R-click the icon just above the vertical scroll bar on the right, check [x] Tags, and click on the Tags header to sort by "Important". View the contents of a message by double-clicking; it opens in the Thunderbird main window in a new tab. Not what I was looking for - close the tab and try again. There has to be a better way (and there is). I need to be more productive, eliminating duplicate-chained messages, for example. Even the Tag "Important" that was added during the investigation phase is "not so much" for my current task.
    In the "Search Messages" window, click [ Save as Search Folder ]. [ teradata ] appears as a new folder in my Inbox. Focus on that folder and the results appear as a list of messages like in every other folder in the Inbox: only the results of the search are shown, and a preview window is now available for each message. Sort, select a message, and Cursor Down navigates quickly through the messages.
    But wait, there's more ... Click Find (Ctrl-F) and enter a search term for the message body, like [ LIBPATH ]. The search is "sticky" ... each message you cycle through will focus (and highlight) the LIBPATH search term.
    And still more ... Reset the Tag "Important" on a message: press "1" and the tag is removed; press "4" and a new Tag "ToDo" is applied. After applying all of the tags, sort by Tag for a new message order.
    Adjust the search criteria ... R-click on the [ teradata ] search folder, and choose Properties to add additional criteria to narrow the search; some of the information I'm looking for did not contain "teradata" in the subject line: + Body [ contains ] [ Best Practices ]. That's it. Much more efficient search. Thank you Thunderbird.

    Read the article

  • Is hidden content (display: none;) -indexed- by search engines? [closed]

    - by user568458
    Possible Duplicate: How bad is it to use display: none in CSS? We've established on this site before (in this question) that, since there are so many legitimate uses for hiding content with display: none; when creating interactive features, that sites aren't automatically penalised for content that is hidden this way (so long as it doesn't look algorithmically spammy). Google's Webmaster guidelines also make clear that a good practice when using content that is initially legitimately hidden for interactivity purposes is to also include the same content in a <noscript> tag, and Google recommend that if you design and code for users including users with screen readers or javascript disabled, then 9 times out of 10 good relevant search rankings will follow (though their specific advice seems more written for cases where javascript writes new content to the page). JavaScript: Place the same content from the JavaScript in a tag. If you use this method, ensure the contents are exactly the same as what’s contained in the JavaScript, and that this content is shown to visitors who do not have JavaScript enabled in their browser. So, best practice seems pretty clear. What I can't find out is, however, the simple factual matter of whether hidden content is indexed by search engines (but with potential penalties if it looks 'spammy'), or, whether it is ignored, or, whether it is indexed but with a lower weighting (like <noscript> content is, apparently). (for bonus points it would be great to know if this varies or is consistent between display: none;, visibility: hidden;, etc, but that isn't crucial). This is different to the other questions on display:none; and SEO - those are about good and bad practice and the answers are discussions of good and bad practice, I'm interested simply in the factual 'Yes or no' question of whether search engines index, or ignore, content that is in display: none; - something those other questions' answers aren't totally clear on. One other question has an answer, "Yes", supported by a link to an article that doesn't really clear things up: it establishes that search engines can spot that text is hidden, it discusses (again) whether hidden text causes sites to be marked as spam, and ultimately concludes that in mid 2011, Google's policy on hidden text was evolving, and that they hadn't at that time started automatically penalising display:none; or marking it as spam. It's clear that display: none; isn't always spam and isn't always treated as spam (many Google sites use it...): but this doesn't clear up how, or if, it is indexed. What I will do will be to follow the guidelines and make sure that all the content that is initially hidden which regular users can explore using javascript-driven interactivity is also structured in way that noscript/screenreader users can use. So I'm not interested in best practice, opinions etc because best practice seems to be really clear: accessibility best practices boosts SEO. But I'd like to know what exactly will happen: whether any display: none; content I have alongside <noscript> or otherwise accessibility-optimised content will be be ignored, or indexed again, or picked up to compare against the <noscript> content but not indexed... etc.

    Read the article

  • Selenium – Use Data Driven tests to run in multiple browsers and sizes

    - by Aligned
    Originally posted on: http://geekswithblogs.net/Aligned/archive/2013/11/04/selenium-ndash-use-data-driven-tests-to-run-in-multiple.aspx
    Selenium uses WebDriver (or is it the same? I'm still learning how it is connected) to run Automated UI tests in many different browsers. For example, you can run the same test in Chrome and Firefox and in a smaller sized Chrome browser. The permutations can grow quickly. One way to get them to run in MStest is to create a test method for each test (ie ChromeDeleteItem_Small, ChromeDeleteItem_Large, FFDeleteItem_Small, FFDeleteItem_Large) that each call the same base method, passing in the browser and size you'd like. This approach was causing a lot of duplicate code, so I decided to use the data driven approach, common to Coded UI or Unit test methods.
    1. Create a class with a test method.
    2. Create a csv with two columns: BrowserType, BrowserSize
    3. Add rows for each permutation: Chrome, Large | Chrome, Small | Firefox, Large | Firefox, Small | IE, Large | IE, Small | ***
    4. Add the csv to the Visual Studio Project.
    5. Set the Copy to output directory to Copy always
    6. Add the attribute: [DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV", "|DataDirectory|\\TestMatrix.csv", "TestMatrix#csv", DataAccessMethod.Sequential), DeploymentItem("TestMatrix.csv")]
    7. Run the test in the test explorer
    Example: [CodedUITest] public class AllTasksTests : TasksTestBase { [TestMethod] [TestCategory("Tasks")] [DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV", "|DataDirectory|\\TestMatrix.csv", "TestMatrix#csv", DataAccessMethod.Sequential), DeploymentItem("TestMatrix.csv")] public void CreateTask() { this.PrepForDataDrivenTest(); base.CreateTaskTest("New Task"); } } protected void PrepForDataDrivenTest() { var browserType = this.ParseBrowserType(Context.DataRow["BrowserType"].ToString()); var browserSize = this.ParseBrowserSize(Context.DataRow["BrowserSize"].ToString()); this.BrowserType = browserType; this.BrowserSize = browserSize; Trace.WriteLine("browser: " + browserType.ToString()); Trace.WriteLine("browser size: " + browserSize.ToString()); } /// <summary> /// Get the enum value from the string /// </summary> /// <param name="browserType">Chrome, Firefox, or IE</param> /// <returns>The browser type.</returns> private BrowserType ParseBrowserType(string browserType) { return (UITestFramework.BrowserType)Enum.Parse(typeof(UITestFramework.BrowserType), browserType, true); } /// <summary> /// Get the browser size enum value from the string /// </summary> /// <param name="browserSize">Small, Medium, Large</param> /// <returns>the browser size</returns> private BrowserSizeEnum ParseBrowserSize(string browserSize) { return (BrowserSizeEnum)Enum.Parse(typeof(BrowserSizeEnum), browserSize, true); } /// <summary> /// Change the browser to the size based on the enum. 
/// </summary> /// <param name="browserSize">The BrowserSizeEnum value to resize the window to.</param> private void ResizeBrowser(BrowserSizeEnum browserSize) { switch (browserSize) { case BrowserSizeEnum.Large: this.driver.Manage().Window.Maximize(); break; case BrowserSizeEnum.Medium: this.driver.Manage().Window.Size = new Size(800, this.driver.Manage().Window.Size.Height); break; case BrowserSizeEnum.Small: this.driver.Manage().Window.Size = new Size(500, this.driver.Manage().Window.Size.Height); break; default: break; } }/// <summary> /// Browser sizes for automation testing /// </summary> public enum BrowserSizeEnum { /// <summary> /// Large size, Maximized to the desktop /// </summary> Large, /// <summary> /// Similar to tablets /// </summary> Medium, /// <summary> /// Phone sizes... 610px and smaller /// </summary> Small } Hope it helps!
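    For reference, the TestMatrix.csv described in steps 2 and 3 would look something like this (the header names must match the Context.DataRow lookups in the code above):
      BrowserType,BrowserSize
      Chrome,Large
      Chrome,Small
      Firefox,Large
      Firefox,Small
      IE,Large
      IE,Small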

    Read the article

  • bulk insert and update with ADO.NET Entity Framework

    - by Keith Barrows
    I am writing a small application that does a lot of feed processing. I want to use LINQ EF for this as speed is not an issue, it is a single user app and, in the end, will only be used once a month. My question revolves around the best way to do bulk inserts using LINQ EF. After parsing the incoming data stream I end up with a List of values. Since the end user may end up trying to import some duplicate data I would like to "clean" the data during insert rather than reading all the records, doing a for loop, rejecting records, then finally importing the remainder. This is what I am currently doing: DateTime minDate = dataTransferObject.Min(c => c.DoorOpen); DateTime maxDate = dataTransferObject.Max(c => c.DoorOpen); using (LabUseEntities myEntities = new LabUseEntities()) { var recCheck = myEntities.ImportDoorAccess.Where(a => a.DoorOpen >= minDate && a.DoorOpen <= maxDate).ToList(); if (recCheck.Count > 0) { foreach (ImportDoorAccess ida in recCheck) { DoorAudit da = dataTransferObject.Where(a => a.DoorOpen == ida.DoorOpen && a.CardNumber == ida.CardNumber).First(); if (da != null) da.DoInsert = false; } } ImportDoorAccess newIDA; foreach (DoorAudit newDoorAudit in dataTransferObject) { if (newDoorAudit.DoInsert) { newIDA = new ImportDoorAccess { CardNumber = newDoorAudit.CardNumber, Door = newDoorAudit.Door, DoorOpen = newDoorAudit.DoorOpen, Imported = newDoorAudit.Imported, RawData = newDoorAudit.RawData, UserName = newDoorAudit.UserName }; myEntities.AddToImportDoorAccess(newIDA); } } myEntities.SaveChanges(); } I am also getting this error: System.Data.UpdateException was unhandled Message="Unable to update the EntitySet 'ImportDoorAccess' because it has a DefiningQuery and no element exists in the element to support the current operation." Source="System.Data.SqlServerCe.Entity" What am I doing wrong? Any pointers are welcome.
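    As an aside on the duplicate filtering itself (and assuming System.Linq and System.Collections.Generic are already imported), the per-record Where(...).First() scan over dataTransferObject can be replaced by one keyed lookup built from recCheck; a rough sketch reusing the same names as above:
      // Build the set of (DoorOpen, CardNumber) pairs already present in the
      // import window, then only add records whose pair is not in the set.
      var existingKeys = new HashSet<string>(
          recCheck.Select(r => r.DoorOpen.Ticks + "|" + r.CardNumber));

      foreach (DoorAudit newDoorAudit in dataTransferObject)
      {
          string key = newDoorAudit.DoorOpen.Ticks + "|" + newDoorAudit.CardNumber;
          if (!existingKeys.Add(key))
              continue; // already in ImportDoorAccess, or a duplicate inside this feed

          myEntities.AddToImportDoorAccess(new ImportDoorAccess
          {
              CardNumber = newDoorAudit.CardNumber,
              Door = newDoorAudit.Door,
              DoorOpen = newDoorAudit.DoorOpen,
              Imported = newDoorAudit.Imported,
              RawData = newDoorAudit.RawData,
              UserName = newDoorAudit.UserName
          });
      }
      myEntities.SaveChanges();
    Separately, the UpdateException quoted above is, in my experience, what Entity Framework reports when the target table has no primary key defined, so it gets mapped as a read-only view with a DefiningQuery; checking the key definition on ImportDoorAccess is usually the first step.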

    Read the article

  • How to programmatically add x509 certificate to local machine store using c#

    - by David
    I understand the question title may be a duplicate but I have not found an answer for my situation yet, so here goes; I have this simple piece of code // Convert the Filename to an X509 Certificate X509Certificate2 cert = new X509Certificate2(certificateFilePath); // Get the server certificate store X509Store store = new X509Store(StoreName.TrustedPeople, StoreLocation.LocalMachine); store.Open(OpenFlags.MaxAllowed); store.Add(cert); // x509 certificate created from a user supplied filename But I keep being presented with an "Access Denied" exception. I have read some information that suggests using StorePermissions would solve my issue but I don't think this is relevant in my code. Having said that, I did test it to be sure and I couldn't get it to work. I also found suggestions that changing folder permissions within Windows was the way to go and while this may work (not tested), it doesn't seem practical for what will become distributed code. I also have to add that as the code will be running as a service on a server, adding the certificates to the current user store also seems wrong. Is there any way to programmatically add a certificate into the local machine store?
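    For reference, a variant I have seen suggested (not verified in my setup) is to ask for write access explicitly when opening the store and to make sure the code runs elevated, since writing to LocalMachine stores normally requires administrative rights and that is the usual source of the Access Denied:
      using System.Security.Cryptography.X509Certificates;

      // Assumes the process has administrative rights (e.g. an elevated service
      // account); without them, writes to StoreLocation.LocalMachine are denied.
      X509Certificate2 cert = new X509Certificate2(certificateFilePath);
      X509Store store = new X509Store(StoreName.TrustedPeople, StoreLocation.LocalMachine);
      try
      {
          store.Open(OpenFlags.ReadWrite); // request write access, not just MaxAllowed
          store.Add(cert);
      }
      finally
      {
          store.Close();
      }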

    Read the article

  • Building a VS2010 solution from TFS2008

    - by slugster
    I have a TFS 2008 Build Agent that has been used to build .NET 3.5 applications. I now have a .NET 4.0 app which I want to compile on the same build agent. I have ensured that MSBuild 4.0 is installed on there and all the required componentry is also installed, but I am getting the following MSB4062 error when building: [Any CPU/Release] C:\Program Files\MSBuild\Microsoft\VisualStudio\v10.0\WebApplications\Microsoft.WebApplication.targets(244,5): error MSB4062: The "Microsoft.WebApplication.Build.Tasks.GetSilverlightItemsFromProperty" task could not be loaded from the assembly C:\Program Files\MSBuild\Microsoft\VisualStudio\v10.0\WebApplications\Microsoft.WebApplication.Build.Tasks.dll. Could not load file or assembly 'file:///C:\Program Files\MSBuild\Microsoft\VisualStudio\v10.0\WebApplications\Microsoft.WebApplication.Build.Tasks.dll' or one of its dependencies. This assembly is built by a runtime newer than the currently loaded runtime and cannot be loaded. Confirm that the declaration is correct, and that the assembly and all its dependencies are available. I am presuming that I get this because the TFSBuild.proj gets executed by MSBuild 3.5, which in turn means my solution is compiled with MSBuild 3.5. Am I correct with my diagnosis? Is there any way to ensure that TFS2008 uses MSBuild 4.0 for my solution? Can it be done on a single team project so that it doesn't affect any other team projects being built on the same build agent? Note that I have checked the question Build failing - VS2010 solution on TFS2008 and this is not a duplicate. Thanks :)

    Read the article

  • ASP.NET MVC - Override Html.TextBoxFor(model.property) with a new helper with the same signature?

    - by JK
    I want to override Html.TextBoxFor() with my own helper that has the exact same signature (but a different namespace of course) - is this possible, and if so, how? The reason for this is that I have 100+ views in an already existing app, and I want to change the behaviour of TextBoxFor so that it outputs a maxLength=n attribute if the property has a [StringLength(n)] annotation. The code for automatically outputting maxlength=n is in this question: http://stackoverflow.com/questions/2386365/maxlength-attribute-of-a-text-box-from-the-dataannotations-stringlength-in-mvc2. But my question is not a duplicate - I am trying to create a more generic solution: one where the DataAnnotation flows into the html automatically without any need for additional code by the person writing the view. In the referenced question, you have to change every single Html.TextBoxFor to a Html.CustomTextBoxFor. I need to do it so that the existing TextBoxFor() calls do not need to be changed - hence creating a helper with the same signature: change the behaviour of the helper method, and all existing instances will just work without any changes (100+ views, at least 500 TextBoxFor()s - I don't want to manually edit that). I tried this code (and I need to repeat it for each overload of TextBoxFor, but once the root problem is solved, that will be trivial): namespace My.Helpers { public static class CustomTextBoxHelper { public static MvcHtmlString TextBoxFor<TModel, TProperty>(this HtmlHelper<TModel> htmlHelper, Expression<Func<TModel, TProperty>> expression, object htmlAttributes, bool includeLengthIfAnnotated) { // implementation here } } } But I am getting a compiler error in the view on Html.TextBoxFor(): "The call is ambiguous between the following methods or properties" (of course). Is there any way to do this? Is there an alternative approach that would allow me to change the behaviour of Html.TextBoxFor, so that the views that already use it do not need to be changed?
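    For completeness, the maxlength-from-StringLength helper referred to above would look roughly like this; this sketch deliberately uses a different method name, because two imported extension methods with identical signatures will always be ambiguous to the compiler, and the reflection path and names are my own rather than taken from the linked answer:
      using System;
      using System.ComponentModel.DataAnnotations;
      using System.Linq;
      using System.Linq.Expressions;
      using System.Web.Mvc;
      using System.Web.Mvc.Html;
      using System.Web.Routing;

      public static class MaxLengthTextBoxHelper
      {
          // Hypothetical helper: emits maxlength="n" when the property carries [StringLength(n)].
          public static MvcHtmlString MaxLengthTextBoxFor<TModel, TProperty>(
              this HtmlHelper<TModel> htmlHelper,
              Expression<Func<TModel, TProperty>> expression,
              object htmlAttributes)
          {
              var attributes = new RouteValueDictionary(htmlAttributes);

              var member = expression.Body as MemberExpression;
              if (member != null)
              {
                  var stringLength = member.Member
                      .GetCustomAttributes(typeof(StringLengthAttribute), true)
                      .Cast<StringLengthAttribute>()
                      .FirstOrDefault();
                  if (stringLength != null && !attributes.ContainsKey("maxlength"))
                      attributes["maxlength"] = stringLength.MaximumLength;
              }

              return htmlHelper.TextBoxFor(expression, attributes);
          }
      }
    To keep the existing TextBoxFor calls untouched, the remaining problem is exactly the ambiguity above; as far as I know the compiler gives no way around two identical extension signatures being in scope at once, so the realistic options are a different name (plus a mechanical find-and-replace) or not importing one of the two namespaces in the views.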

    Read the article

  • Spring AOP AfterThrowing vs. Around Advice

    - by whiskerz
    Hey there, when trying to implement an Aspect that is responsible for catching and logging a certain type of error, I initially thought this would be possible using the AfterThrowing advice. However, it seems that this advice doesn't catch the exception, but just provides an additional entry point to do something with the exception. The only advice which would also catch the exception in question would then be an Around advice - either that or I did something wrong. Can anyone confirm that, if I indeed want to catch the exception, I have to use an Around advice? The configuration I used follows: @Pointcut("execution(* test.simple.OtherService.print*(..))") public void printOperation() {} @AfterThrowing(pointcut="printOperation()", throwing="exception") public void logException(Throwable exception) { System.out.println(exception.getMessage()); } @Around("printOperation()") public void swallowException(ProceedingJoinPoint pjp) throws Throwable { try { pjp.proceed(); } catch (Throwable exception) { System.out.println(exception.getMessage()); } } Note that in this example I caught all Exceptions, because it is just an example. I know it's bad practice to just swallow all exceptions, but for my current use case I want one special type of exception to be just logged while avoiding duplicate logging logic.

    Read the article

  • NHibernate SubclassMap gives DuplicateMappingException

    - by stiank81
    I'm using NHibernate to handle my database - with Fluent configuration. I'm not using Automappings. All mappings are written explicitly, and everything is working just fine. Now I wanted to add my first mapping to a subclass, using the SubclassMap, and I run into problems. With the simplest possible setup an Nhibernate DuplicateMappingException is thrown, saying that the subclass is mapped more than once: NHibernate.MappingException : Could not compile the mapping document: (XmlDocument) ---- NHibernate.DuplicateMappingException : Duplicate class/entity mapping MyNamespace.SubPerson I get this with my simple classes written for testing: public class Person { public int Id { get; set; } public string Name { get; set; } } public class SubPerson : Person { public string Foo { get; set; } } With the following mappings: public class PersonMapping : ClassMap<Person> { public PersonMapping() { Not.LazyLoad(); Id(c => c.Id); Map(c => c.Name); } } public class SubPersonMapping : SubclassMap<SubPerson> { public SubPersonMapping() { Not.LazyLoad(); Map(m => m.Foo); } } Any idea why this is happening? If there were automappings involved I guess it might have been caused by the automappings adding a mapping too, but there should be no automapping. I create my database specifying a fluent mapping: private static ISession CreateSession() { var cfg = Fluently.Configure(). Database(SQLiteConfiguration.Standard.ShowSql().UsingFile("unit_test.db")). Mappings(m => m.FluentMappings.AddFromAssemblyOf<SomeClassInTheAssemblyContainingAllMappings>()); var sessionSource = new SessionSource(cfg.BuildConfiguration().Properties, new TestModel()); var session = sessionSource.CreateSession(); _sessionSource.BuildSchema(session); return session; } Again; note that this only happens with SubclassMap. ClassMap's are working just fine!
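    One thing worth ruling out (a guess based on the snippet, not something the exception message proves): the SubclassMap may simply be getting registered twice, once through FluentMappings.AddFromAssemblyOf and once through the TestModel/PersistenceModel handed to SessionSource, if that model scans the same assembly. A sketch that builds the session factory from a single mapping source, to test that theory (SchemaExport here stands in for SessionSource.BuildSchema):
      // using FluentNHibernate.Cfg;
      // using FluentNHibernate.Cfg.Db;
      // using NHibernate.Tool.hbm2ddl;

      var sessionFactory = Fluently.Configure()
          .Database(SQLiteConfiguration.Standard.ShowSql().UsingFile("unit_test.db"))
          .Mappings(m => m.FluentMappings.AddFromAssemblyOf<SomeClassInTheAssemblyContainingAllMappings>())
          // Create the schema from the same, single configuration instead of a second model.
          .ExposeConfiguration(cfg => new SchemaExport(cfg).Create(false, true))
          .BuildSessionFactory();

      var session = sessionFactory.OpenSession();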

    Read the article

  • Get an IDataReader from a typed List

    - by Jason Kealey
    I have a List<MyObject> with a million elements. (It is actually a SubSonic Collection but it is not loaded from the database). I'm currently using SqlBulkCopy as follows: private string FastInsertCollection(string tableName, DataTable tableData) { string sqlConn = ConfigurationManager.ConnectionStrings[SubSonicConfig.DefaultDataProvider.ConnectionStringName].ConnectionString; using (SqlBulkCopy s = new SqlBulkCopy(sqlConn, SqlBulkCopyOptions.TableLock)) { s.DestinationTableName = tableName; s.BatchSize = 5000; s.WriteToServer(tableData); s.BulkCopyTimeout = SprocTimeout; s.Close(); } return sqlConn; } I use SubSonic's MyObjectCollection.ToDataTable() to build the DataTable from my collection. However, this duplicates objects in memory and is inefficient. I'd like to use the SqlBulkCopy.WriteToServer method that uses an IDataReader instead of a DataTable so that I don't duplicate my collection in memory. What's the easiest way to get an IDataReader from my list? I suppose I could implement a custom data reader (like here http://blogs.microsoft.co.il/blogs/aviwortzel/archive/2008/05/06/implementing-sqlbulkcopy-in-linq-to-sql.aspx) , but there must be something simpler I can do without writing a bunch of generic code. Edit: It does not appear that one can easily generate an IDataReader from a collection of objects. Accepting current answer even though I was hoping for something built into the framework.
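    For anyone else who lands here: writing the reader by hand is less work than it first looks, because SqlBulkCopy only exercises a handful of members (essentially FieldCount, Read, GetValue, and GetOrdinal when mappings are given by name). Below is a rough, untested sketch of mine that exposes a type's public properties as columns and lets the unused members throw; the class name is made up:
      using System;
      using System.Collections.Generic;
      using System.Data;
      using System.Reflection;

      // Minimal IDataReader over an IEnumerable<T>, exposing public properties as columns.
      public class EnumerableDataReader<T> : IDataReader
      {
          private readonly IEnumerator<T> _rows;
          private readonly PropertyInfo[] _props = typeof(T).GetProperties();
          private bool _closed;

          public EnumerableDataReader(IEnumerable<T> rows) { _rows = rows.GetEnumerator(); }

          // Members SqlBulkCopy actually uses.
          public int FieldCount { get { return _props.Length; } }
          public bool Read() { return _rows.MoveNext(); }
          public object GetValue(int i) { return _props[i].GetValue(_rows.Current, null) ?? (object)DBNull.Value; }
          public string GetName(int i) { return _props[i].Name; }
          public int GetOrdinal(string name)
          {
              for (int i = 0; i < _props.Length; i++)
                  if (string.Equals(_props[i].Name, name, StringComparison.OrdinalIgnoreCase)) return i;
              throw new IndexOutOfRangeException(name);
          }
          public bool IsDBNull(int i) { return GetValue(i) == DBNull.Value; }
          public int GetValues(object[] values) { int n = Math.Min(values.Length, _props.Length); for (int i = 0; i < n; i++) values[i] = GetValue(i); return n; }
          public Type GetFieldType(int i) { return _props[i].PropertyType; }

          // Housekeeping.
          public void Close() { _closed = true; }
          public bool IsClosed { get { return _closed; } }
          public void Dispose() { _rows.Dispose(); }
          public int Depth { get { return 0; } }
          public int RecordsAffected { get { return -1; } }
          public bool NextResult() { return false; }
          public DataTable GetSchemaTable() { throw new NotSupportedException(); }

          // Members below are not needed for bulk copy and simply delegate or throw.
          public object this[int i] { get { return GetValue(i); } }
          public object this[string name] { get { return GetValue(GetOrdinal(name)); } }
          public bool GetBoolean(int i) { throw new NotSupportedException(); }
          public byte GetByte(int i) { throw new NotSupportedException(); }
          public long GetBytes(int i, long fieldOffset, byte[] buffer, int bufferOffset, int length) { throw new NotSupportedException(); }
          public char GetChar(int i) { throw new NotSupportedException(); }
          public long GetChars(int i, long fieldOffset, char[] buffer, int bufferOffset, int length) { throw new NotSupportedException(); }
          public IDataReader GetData(int i) { throw new NotSupportedException(); }
          public string GetDataTypeName(int i) { throw new NotSupportedException(); }
          public DateTime GetDateTime(int i) { throw new NotSupportedException(); }
          public decimal GetDecimal(int i) { throw new NotSupportedException(); }
          public double GetDouble(int i) { throw new NotSupportedException(); }
          public float GetFloat(int i) { throw new NotSupportedException(); }
          public Guid GetGuid(int i) { throw new NotSupportedException(); }
          public short GetInt16(int i) { throw new NotSupportedException(); }
          public int GetInt32(int i) { throw new NotSupportedException(); }
          public long GetInt64(int i) { throw new NotSupportedException(); }
          public string GetString(int i) { throw new NotSupportedException(); }
      }
    FastInsertCollection could then hand new EnumerableDataReader<MyObject>(collection) straight to s.WriteToServer(...) instead of building a DataTable first, with the caveat that the property order (or explicit ColumnMappings by name) has to line up with the destination table.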

    Read the article

  • Google Maps v3: Enforcing min. zoom level when using fitBounds

    - by chris
    I'm drawing a series of markers on a map (using v3 of the Maps API). In v2, I had the following code: bounds = new GLatLngBounds(); ... loop thru and put markers on map ... bounds.extend(point); ... end looping map.setCenter(bounds.getCenter()); var level = map.getBoundsZoomLevel(bounds); if ( level == 1 ) level = 5; map.setZoom(level > 6 ? 6 : level); And that worked fine to ensure that there was always an appropriate level of detail displayed on the map. I'm trying to duplicate this functionality in v3, but setZoom and fitBounds don't seem to be cooperating: ... loop thru and put markers on the map var ll = new google.maps.LatLng(p.lat,p.lng); bounds.extend(ll); ... end loop var zoom = map.getZoom(); map.setZoom(zoom > 6 ? 6 : zoom); map.fitBounds(bounds); I've tried different permutations (moving the fitBounds before the setZoom, for example) but nothing I do with setZoom seems to affect the map. Am I missing something? Is there a way to do this?

    Read the article

  • SSRS 2008: is it possible to make a report parameter NOT query-based for some linked report?

    - by Stefan Mohr
    I suspect the answer is no, but here goes... I'm using the WebForms Report Viewer on a public-facing website to allow users to report on themselves or their users (if the user is an admin user). A report has a parameter called Users where an admin can pick a user from the list and generate a report for them. Mundane users can also view this report, but I programmatically create a linked report for each user and set the UserID value to their ID so they can only view themselves. This works well except that the UserID parameter is query-based, and not every user is visible in the list using default settings (the user list is based on date range parameters the user can provide, and only users we consider 'active' during the date range are visible). This blows up for mundane users who are not active during the default date range (which is the previous month). I suspect the flow of execution is something like this: the report loads with default parameters; the linked report rules are applied and the value of UserID is overridden with the ID in the linked report; the UserID field is hidden to prevent the user from changing it; SSRS can't find the UserID default value in the query results (from a query I didn't even want it to run), so it displays the error "The 'UserID' parameter is missing a value". Through some testing I've found a perfect correlation between users not inside the default date range and users who can't view the report. Can anyone suggest a way to make the report usable for those users who aren't in the default list? The reports are created programmatically, so I do have a fair bit of control over the situation. I would love to simply be able to mark a parameter in a linked report as no longer being query-based, but those properties are all read-only. I really, really don't want to have to create duplicate reports to accommodate these users, but I'm at a bit of a loss right now. Any suggestions are greatly appreciated!

    Read the article

  • Argument exception after trying to use TryGetObjectByKey

    - by Rickjaah
    Hi, I'm trying to retrieve an object from my database using Entity Framework 4. When I use the following code it gives an ArgumentException: An item with the same key has already been added. if (databaseContext.TryGetObjectByKey(entityKey, out result)) { return (result != null && result is TEntityObject) ? result as TEntityObject : null; } else { return null; } When I check the objectContext I see the entities, but the lookup only works if I first enumerate the specific list of entities manually in VS2010. What am I missing? Do I have to do something else before I can get the item from the database? I searched Google but could not find any results; the same for the MSDN library. STACKTRACE: [ArgumentException: An item with the same key has already been added.] System.ThrowHelper.ThrowArgumentException(ExceptionResource resource) +52 System.Collections.Generic.Dictionary`2.Insert(TKey key, TValue value, Boolean add) +9549131 System.Data.Metadata.Edm.ObjectItemAttributeAssemblyLoader.LoadRelationshipTypes() +661 System.Data.Metadata.Edm.ObjectItemAttributeAssemblyLoader.LoadTypesFromAssembly() +17 System.Data.Metadata.Edm.ObjectItemAssemblyLoader.Load() +25 System.Data.Metadata.Edm.ObjectItemAttributeAssemblyLoader.Load() +4 System.Data.Metadata.Edm.AssemblyCache.LoadAssembly(Assembly assembly, Boolean loadReferencedAssemblies, ObjectItemLoadingSessionData loadingData) +160 System.Data.Metadata.Edm.AssemblyCache.LoadAssembly(Assembly assembly, Boolean loadReferencedAssemblies, KnownAssembliesSet knownAssemblies, EdmItemCollection edmItemCollection, Action`1 logLoadMessage, Object& loaderCookie, Dictionary`2& typesInLoading, List`1& errors) +166 System.Data.Metadata.Edm.ObjectItemCollection.LoadAssemblyFromCache(ObjectItemCollection objectItemCollection, Assembly assembly, Boolean loadReferencedAssemblies, EdmItemCollection edmItemCollection, Action`1 logLoadMessage) +316 System.Data.Metadata.Edm.MetadataWorkspace.ImplicitLoadAssemblyForType(Type type, Assembly callingAssembly) +306 System.Data.Metadata.Edm.MetadataWorkspace.ImplicitLoadFromEntityType(EntityType type, Assembly callingAssembly) +109 System.Data.Objects.ObjectContext.TryGetObjectByKey(EntityKey key, Object& value) +288 EDIT: Lazy loading is set to true. EDIT: Something's wrong... I have to call objectContext.Frontpages.ToArray() before I can use TryGetObjectByKey(). Any ideas, anyone?
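
    A guess rather than a verified fix: the stack trace dies while EF is implicitly loading the entity assembly's metadata, which is also what the Frontpages.ToArray() call forces as a side effect. Loading that metadata explicitly before the lookup might make the ToArray() workaround unnecessary; a sketch follows, where TEntityObject mirrors the type parameter already used in the snippet above:

    using System.Data;
    using System.Data.Objects;

    public static class EntityLookup
    {
        public static TEntityObject FindByKey<TEntityObject>(ObjectContext databaseContext, EntityKey entityKey)
            where TEntityObject : class
        {
            // Register the O-space metadata for the entity assembly up front,
            // instead of relying on the implicit load inside TryGetObjectByKey.
            databaseContext.MetadataWorkspace.LoadFromAssembly(typeof(TEntityObject).Assembly);

            object result;
            return databaseContext.TryGetObjectByKey(entityKey, out result)
                ? result as TEntityObject
                : null;
        }
    }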

    Read the article

  • HTML to RTF Converter for .NET

    - by nickyt
    I've already seen lots of posts on the site for RTF to HTML and some other posts talking about HTML to RTF converters, but I'm really trying to get a full breakdown of what is considered the most widely used commercial product, open-source product, or whether people recommend going home-grown. Apologies if you consider this a duplicate question, but I'm trying to create a product matrix to see what is the most viable for our application. I also think this would be helpful for others. The converter would be used in an ASP.NET 2.0 application (we're upgrading to 3.5 shortly but still sticking with WebForms) using SQL Server 2005 (soon 2008) as the DB. From reading a few posts, SautinSoft appears to be popular as a commercial component. Are there other commercial components that you'd recommend for converting HTML to RTF? Price does matter, but even if it's a little on the expensive side, please list it. For open source, I read that OpenOffice.org can be run as a service so that it can convert files. However, this appears to be Java-based only; I imagine I'd need some kind of interop to use this? What .NET open-source components, if any, are out there for converting HTML to RTF? For home-grown, is XSLT the way to go with XHTML? If so, what component do you recommend for generating XHTML? Otherwise, what other home-grown avenues do you recommend? Also, please note that I currently don't care so much about RTF to HTML. If a commercial component offers this and the price is still the same, fine; otherwise please don't mention it.

    Read the article

  • How can I copy a SQL record which has related records in other tables to the same database?

    - by DerekVS
    Hi. I created a function in C# which allows me to copy a record and its related children to a new record and new related children in the same database. (This is for an application that allows the use of previous work as a template for new work.) Anyway, it works great... Here's a description of how it accomplishes the copy: It populates a two-column memory-based look-up table with the current primary key of each record. Next, as it individually creates each new copy record, it updates the look-up table with the Identity PK of the new record [retrieved from SCOPE_IDENTITY()]. Now, when it copies over any related children, it can look up the new parent PK to set the FK on the new record. In testing, it only took a minute to copy a relational structure on a local instance of SQL Server 2005 Express Edition. Unfortunately it is proving to be horribly slow in production! My users are dealing with 60,000+ records per parent record over the LAN to our SQL Server! While my copy function still works, each of those records represents an individual SQL INSERT command, and it loads the SQL Server at about 17% CPU, up from its normal 2% idle. I just finished testing a 50,000-record copy and it took almost 20 minutes! Is there a way to duplicate this functionality in SQL queries or stored procedures to make the SQL Server do all of the copy work instead of blasting it over the LAN from each client? (We're running Microsoft SQL Server 2005 Standard Edition.) Thanks! -Derek
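
    For reference, the client-side pattern described above boils down to something like the following sketch; the table, column, and type names are made up, and the real schema will differ. Every parent and child row costs its own round trip, which is where the 20 minutes go:

    using System.Collections.Generic;
    using System.Data.SqlClient;

    class Parent { public int Id; public string Name; }
    class Child  { public int ParentId; public string Value; }

    static class RecordCopier
    {
        public static void CopyParentsWithChildren(string connectionString, IEnumerable<Parent> parents, IEnumerable<Child> children)
        {
            var keyMap = new Dictionary<int, int>();   // old parent PK -> new parent PK

            using (var conn = new SqlConnection(connectionString))
            {
                conn.Open();

                foreach (var parent in parents)
                {
                    using (var cmd = new SqlCommand(
                        "INSERT INTO Parent (Name) VALUES (@name); SELECT CAST(SCOPE_IDENTITY() AS int);", conn))
                    {
                        cmd.Parameters.AddWithValue("@name", parent.Name);
                        keyMap[parent.Id] = (int)cmd.ExecuteScalar();   // remember old PK -> new PK
                    }
                }

                foreach (var child in children)
                {
                    using (var cmd = new SqlCommand(
                        "INSERT INTO Child (ParentId, Value) VALUES (@parentId, @value);", conn))
                    {
                        cmd.Parameters.AddWithValue("@parentId", keyMap[child.ParentId]);   // re-point the FK at the new parent
                        cmd.Parameters.AddWithValue("@value", child.Value);
                        cmd.ExecuteNonQuery();
                    }
                }
            }
        }
    }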

    Read the article

  • how to rethrow same exception in sql server

    - by Shantanu Gupta
    I want to rethrow the same exception in SQL Server that occurred in my try block. I am able to throw the same message, but I want to throw the same error. BEGIN TRANSACTION BEGIN TRY INSERT INTO Tags.tblDomain (DomainName, SubDomainId, DomainCode, Description) VALUES(@DomainName, @SubDomainId, @DomainCode, @Description) COMMIT TRANSACTION END TRY BEGIN CATCH declare @severity int; declare @state int; select @severity=error_severity(), @state=error_state(); RAISERROR(@@Error, @severity, @state); ROLLBACK TRANSACTION END CATCH The RAISERROR(@@Error, @severity, @state); line does raise an error, but not the one I want: it raises an error with error number 50000, whereas I want the error number that I am passing via @@Error to be thrown, so that I can capture that error number at the front end, i.e. catch (SqlException ex) { if (ex.Number == 2627) MessageBox.Show("Duplicate value cannot be inserted"); } I want this functionality, which can't be achieved using RAISERROR. I don't want to give a custom error message at the back end.
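
    On the front-end side, assuming the original error number actually reaches the client (which is exactly what the RAISERROR above fails to preserve, since it re-raises as 50000), the check would look roughly like this; the connection handling and the WinForms MessageBox are placeholders matching the snippet in the question:

    using System.Data.SqlClient;
    using System.Windows.Forms;

    static class DomainInsert
    {
        public static void Insert(string connectionString, string domainCode)
        {
            try
            {
                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand(
                    "INSERT INTO Tags.tblDomain (DomainCode) VALUES (@code)", conn))
                {
                    cmd.Parameters.AddWithValue("@code", domainCode);
                    conn.Open();
                    cmd.ExecuteNonQuery();
                }
            }
            catch (SqlException ex)
            {
                if (ex.Number == 2627)   // unique/primary key constraint violation
                    MessageBox.Show("Duplicate value cannot be inserted");
                else
                    throw;
            }
        }
    }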

    Read the article

< Previous Page | 138 139 140 141 142 143 144 145 146 147 148 149  | Next Page >