Search Results

Search found 22641 results on 906 pages for 'use case'.


  • SQL Server IO handling mechanism can be severely affected by high CPU usage

    - by sqlworkshops
    Are you using an SSD or a SAN / NAS based storage solution and sporadically observe SQL Server experiencing high IO wait times, or does your DAS / HDD from time to time become very slow according to SQL Server statistics? Read on… I need your help to up-vote my Connect item – https://connect.microsoft.com/SQLServer/feedback/details/744650/sql-server-io-handling-mechanism-can-be-severely-affected-by-high-cpu-usage. Instead of taking a few seconds, queries can take minutes or hours to complete when the CPU is busy.
    In SQL Server, when a query / request needs to read data that is not in the data cache, or when the request has to write to disk (transaction log records, for example), the request / task queues up the IO operation and waits for it to complete (the task is in the suspended state; this wait time is the resource wait time). When the IO operation is complete, the task is queued to run on the CPU. If the CPU is busy executing other tasks, this task waits (in the runnable state) until the other tasks in the queue either complete, get suspended due to waits, or exhaust their quantum of 4ms (this is the signal wait time, which together with the resource wait time makes up the overall wait time). When the CPU becomes free, the task finally runs on the CPU (the running state).
    The signal wait time can be up to 4ms per runnable task; this is by design. So if a CPU has 5 runnable tasks in the queue, then after the resource becomes available this query might wait up to a maximum of 5 x 4ms = 20ms in the runnable state (normally less, as other tasks might not use their full quantum).
    When CPU usage is high – say many CPU-intensive queries are running on the instance – there is a possibility that IO operations that have completed at the hardware and operating system level are not yet processed by SQL Server, keeping the task in the resource wait state for longer than necessary. With an SSD, the IO operation might even complete in less than a millisecond, but it might take SQL Server hundreds of milliseconds, for instance, to process the completed IO operation. For example, say you have a user inserting 500 rows in individual transactions. When the transaction log is on an SSD or on a battery-backed controller with write cache enabled, all of these inserts will complete in 100 to 200ms. With a CPU-intensive parallel query executing across all CPU cores, the same inserts might take minutes to complete. WRITELOG wait time will be very high in this case (both under sys.dm_io_virtual_file_stats and sys.dm_os_wait_stats). In addition you will notice a large number of WRITELOG waits, since log records are written by the LOG WRITER, and hence very high signal_wait_time_ms, leading to more query delays. However, the Performance Monitor counter PhysicalDisk: Avg. Disk sec/Write will report very low latency.
    Such delayed IO handling also affects read operations, with artificially very high PAGEIOLATCH_SH wait time (while the number of PAGEIOLATCH_SH waits remains the same). This problem will manifest more and more as customers start using SSD-based storage for SQL Server, since faster IOs drive CPU usage to its limits. We have a few workarounds for specific scenarios, but we think Microsoft should resolve this issue at the product level.
    We have a Connect item open – https://connect.microsoft.com/SQLServer/feedback/details/744650/sql-server-io-handling-mechanism-can-be-severely-affected-by-high-cpu-usage – with example scripts to reproduce this behavior; please up-vote the item so the issue will be addressed by the SQL Server product team soon.
    Thanks for your help and best regards,
    Ramesh Meyyappan
    Home: www.sqlworkshops.com
    LinkedIn: http://at.linkedin.com/in/rmeyyappan
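    To get a rough sense of whether this pattern applies to an instance, a minimal diagnostic sketch along these lines splits the relevant wait types into their signal and resource components – a large signal share points at CPU pressure rather than slow storage:

    -- Sketch only: compare signal wait (time spent runnable, waiting for CPU) with
    -- resource wait (time spent waiting for the IO itself) for the wait types above.
    SELECT wait_type,
           waiting_tasks_count,
           wait_time_ms,
           signal_wait_time_ms,
           wait_time_ms - signal_wait_time_ms AS resource_wait_time_ms
    FROM sys.dm_os_wait_stats
    WHERE wait_type IN ('WRITELOG', 'PAGEIOLATCH_SH');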

    Read the article

  • Detect multitouch (two-finger touch) on a sprite to apply pinch zoom behaviour

    - by Tahreem
    I am using andengine, want to move, zoom and rotate multiple sprites individually on a scene. I have achieved "move" but for pinch zoom i am unable to get the event to two fingers' touch. Below is the code: public class Main extends SimpleBaseGameActivity { private Camera camera; private BitmapTextureAtlas mBitmapTextureAtlas; private ITextureRegion mFaceTextureRegion; private ITextureRegion mFaceTextureRegion2; Sprite face2; private static final int CAMERA_WIDTH = 800; private static final int CAMERA_HEIGHT = 480; @Override public EngineOptions onCreateEngineOptions() { camera = new Camera(0, 0, CAMERA_WIDTH, CAMERA_HEIGHT); EngineOptions engineOptions = new EngineOptions(true, ScreenOrientation.LANDSCAPE_FIXED, new RatioResolutionPolicy( CAMERA_WIDTH, CAMERA_HEIGHT), camera); return engineOptions; } @Override protected void onCreateResources() { BitmapTextureAtlasTextureRegionFactory.setAssetBasePath("gfx/"); this.mBitmapTextureAtlas = new BitmapTextureAtlas( this.getTextureManager(), 1024, 1600, TextureOptions.NEAREST); BitmapTextureAtlasTextureRegionFactory.createTiledFromAsset(this.mBitmapTextureAtlas, // this, "ui_ball_1.png", 0, 0, 1, 1), // this.getVertexBufferObjectManager()); this.mFaceTextureRegion = BitmapTextureAtlasTextureRegionFactory .createFromAsset(this.mBitmapTextureAtlas, this, "ui_ball_1.png", 0, 0); this.mFaceTextureRegion2 = BitmapTextureAtlasTextureRegionFactory .createFromAsset(this.mBitmapTextureAtlas, this, "ui_ball_1.png", 0, 0); this.mBitmapTextureAtlas.load(); this.mEngine.getTextureManager().loadTexture(this.mBitmapTextureAtlas); } @Override protected Scene onCreateScene() { this.mEngine.registerUpdateHandler(new FPSLogger()); final Scene scene = new Scene(); scene.setBackground(new Background(0.09804f, 0.6274f, 0.8784f)); final float centerX = (CAMERA_WIDTH - this.mFaceTextureRegion .getWidth()) / 2; final float centerY = (CAMERA_HEIGHT - this.mFaceTextureRegion .getHeight()) / 2; final Sprite face = new Sprite(centerX, centerY, this.mFaceTextureRegion, this.getVertexBufferObjectManager()) { @Override public boolean onAreaTouched(final TouchEvent pSceneTouchEvent, final float pTouchAreaLocalX, final float pTouchAreaLocalY) { this.setPosition(pSceneTouchEvent.getX() - this.getWidth() / 2, pSceneTouchEvent.getY() - this.getHeight() / 2); return true; } }; face.setScale(2); scene.attachChild(face); scene.registerTouchArea(face); face2 = new Sprite(200, 200, this.mFaceTextureRegion2, this.getVertexBufferObjectManager()) { @Override public boolean onAreaTouched(final TouchEvent pSceneTouchEvent, final float pTouchAreaLocalX, final float pTouchAreaLocalY) { switch(pSceneTouchEvent.getAction()){ case TouchEvent.ACTION_DOWN: int count = pSceneTouchEvent.getMotionEvent().getPointerCount() ; for(int i= 0; i <count; i++) { int id = pSceneTouchEvent.getMotionEvent().getPointerId(i); } break; case TouchEvent.ACTION_MOVE: this.setPosition(pSceneTouchEvent.getX() -this.getWidth() / 2, pSceneTouchEvent.getY()-this.getHeight() / 2); break; case TouchEvent.ACTION_UP: break; } return true; } }; face2.setScale(2); scene.attachChild(face2); scene.registerTouchArea(face2); scene.setTouchAreaBindingOnActionDownEnabled(true); return scene; } } This line int count = pSceneTouchEvent.getMotionEvent().getPointerCount() ; should set 2 to the count variable if i touch the sprite with to fingers, then i can apply zooming functionality (setScale method) on the sprite by getting the distance between the coordinates of two fingers. Can anyone help me? why it does not detect two fingers? 
and without this how can i zoom the sprite on pinch of two fingers? I am very new to game development, any help would be appreciated. Thanks in advance.
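    As a rough sketch of the distance-based scaling, under the assumption that both pointers actually reach the sprite's onAreaTouched (in many AndEngine setups only the scene receives the second finger, which is why a PinchZoomDetector registered on the scene via an IOnSceneTouchListener is the more common route):

    // assumed import: android.view.MotionEvent
    // previousSpacing would be a float field you add to the activity (hypothetical name)
    private float pointerSpacing(final MotionEvent event) {
        // distance in pixels between the first two pointers
        final float dx = event.getX(0) - event.getX(1);
        final float dy = event.getY(0) - event.getY(1);
        return (float) Math.sqrt(dx * dx + dy * dy);
    }

    // inside face2's onAreaTouched(...), in the ACTION_MOVE case:
    // final MotionEvent motionEvent = pSceneTouchEvent.getMotionEvent();
    // if (motionEvent.getPointerCount() >= 2) {
    //     final float spacing = pointerSpacing(motionEvent);
    //     if (previousSpacing > 0) {
    //         this.setScale(this.getScaleX() * (spacing / previousSpacing));
    //     }
    //     previousSpacing = spacing;
    // }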

    Read the article

  • Handling Configuration Changes in Windows Azure Applications

    - by Your DisplayName here!
    While finalizing StarterSTS 1.5, I had a closer look at lifetime and configuration management in Windows Azure. (this is no new information – just some bits and pieces compiled at one single place – plus a bit of reality check) When dealing with lifetime management (and especially configuration changes), there are two mechanisms in Windows Azure – a RoleEntryPoint derived class and a couple of events on the RoleEnvironment class. You can find good documentation about RoleEntryPoint here. The RoleEnvironment class features two events that deal with configuration changes – Changing and Changed. Whenever a configuration change gets pushed out by the fabric controller (either changes in the settings section or the instance count of a role) the Changing event gets fired. The event handler receives an instance of the RoleEnvironmentChangingEventArgs type. This contains a collection of type RoleEnvironmentChange. This in turn is a base class for two other classes that detail the two types of possible configuration changes I mentioned above: RoleEnvironmentConfigurationSettingsChange (configuration settings) and RoleEnvironmentTopologyChange (instance count). The two respective classes contain information about which configuration setting and which role has been changed. Furthermore the Changing event can trigger a role recycle (aka reboot) by setting EventArgs.Cancel to true. So your typical job in the Changing event handler is to figure if your application can handle these configuration changes at runtime, or if you rather want a clean restart. Prior to the SDK 1.3 VS Templates – the following code was generated to reboot if any configuration settings have changed: private void RoleEnvironmentChanging(object sender, RoleEnvironmentChangingEventArgs e) {     // If a configuration setting is changing     if (e.Changes.Any(change => change is RoleEnvironmentConfigurationSettingChange))     {         // Set e.Cancel to true to restart this role instance         e.Cancel = true;     } } This is a little drastic as a default since most applications will work just fine with changed configuration – maybe that’s the reason this code has gone away in the 1.3 SDK templates (more). The Changed event gets fired after the configuration changes have been applied. Again the changes will get passed in just like in the Changing event. But from this point on RoleEnvironment.GetConfigurationSettingValue() will return the new values. You can still decide to recycle if some change was so drastic that you need a restart. You can use RoleEnvironment.RequestRecycle() for that (more). As a rule of thumb: When you always use GetConfigurationSettingValue to read from configuration (and there is no bigger state involved) – you typically don’t need to recycle. In the case of StarterSTS, I had to abstract away the physical configuration system and read the actual configuration (either from web.config or the Azure service configuration) at startup. I then cache the configuration settings in memory. This means I indeed need to take action when configuration changes – so in my case I simply clear the cache, and the new config values get read on the next access to my internal configuration object. No downtime – nice! Gotcha A very natural place to hook up the RoleEnvironment lifetime events is the RoleEntryPoint derived class. But with the move to the full IIS model in 1.3 – the RoleEntryPoint methods get executed in a different AppDomain (even in a different process) – see here.. 
You might not be able to call into your application code to, for example, clear a cache. Keep that in mind! In this case you need to handle these events from, e.g., global.asax.
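    A minimal sketch of the cache-clearing approach described above (ConfigurationCache is a placeholder for whatever in-memory cache your application uses, not an SDK type, and the decision of when to cancel is application-specific):

    // using System.Linq;
    // using Microsoft.WindowsAzure.ServiceRuntime;

    RoleEnvironment.Changed += (sender, e) =>
    {
        if (e.Changes.OfType<RoleEnvironmentConfigurationSettingChange>().Any())
        {
            // invalidate; the next access re-reads via RoleEnvironment.GetConfigurationSettingValue()
            ConfigurationCache.Clear();
        }
    };

    RoleEnvironment.Changing += (sender, e) =>
    {
        // only force a recycle for changes that genuinely cannot be applied at runtime,
        // e.g. topology changes – many applications need no recycle at all
        e.Cancel = e.Changes.OfType<RoleEnvironmentTopologyChange>().Any();
    };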

    Read the article

  • Part 6: Extensions vs. Modifications

    - by volker.eckardt(at)oracle.com
    Customizations = Extensions + Modifications In the EBS terminology, a customization can be an extension or a modification. Extension means that you mainly create your own code from scratch. You may utilize existing views, packages and java classes, but your code is unique. Modifications are quite different, because here you take existing code and change or enhance certain areas to achieve a slightly different behavior. Important is that it doesn't matter if you place your code at the same or at another place – it is a modification. It is also not relevant if you leave the original code enabled or not! Why? Here is the answer: In case the original code piece you have taken as your base will get patched, you need to copy the source again and apply all your changes once more. If you don't do that, you may get different results or write different data compared to the standard – this causes a high risk! Here are some guidelines how to reduce the risk: Invest a bit longer when searching for objects to select data from. Rather choose a view than a table. In case Oracle development changes the underlying tables, the view will be more stable and is therefore a better choice. Choose rather public APIs over internal APIs. Same background as before: although internal structure might change, the public API is more stable. Use personalization and substitution rather than modification. Spend more time to check if the requirement can be covered with such techniques. Build a project code library, avoid that colleagues creating similar functionality multiple times. Otherwise you have to review lots of similar code to determine the need for correction. Use the technique of “flagged files”. Flagged files is a way to mark a standard deployment file. If you run the patch analyse (within Application Manager), the analyse result will list flagged standard files in case they will be patched. If you maintain a cross reference to your own CEMLIs, you can easily determine which CEMLIs have to be reviewed. Implement a code review process. This can be done by utilizing team internal or external persons. If you implement such a team internal process, your team members will come up with suggestions how to improve the code quality by themselves. Review heavy customizations regularly, to identify options to reduce complexity; let's say perform this every 6th month. You may not spend days for such a review, but a high level cross check if the customization can be reduced is suggested. De-install customizations which are no more required. Define a process for this. Add a section into the technical documentation how to uninstall and what are possible implications. Maintain a cross reference between CEMLIs and between CEMLIs, EBS modules and business processes. Keep this list up to date! Share this list! By following these guidelines, you are able to improve product stability. Although we might not be able to avoid modifications completely, we can give a much better advise to developers and to our test team. Summary: Extensions and Modifications have to be handled differently during their lifecycle. Modifications implicate a much higher risk and should therefore be reviewed more frequently. Good cross references allow you to give clear advise for the testing activities.

    Read the article

  • An Actionable Common Approach to Federal Enterprise Architecture

    - by TedMcLaughlan
    The recent “Common Approach to Federal Enterprise Architecture” (US Executive Office of the President, May 2 2012) is extremely timely and well-organized guidance for the Federal IT investment and deployment community, as useful for Federal Departments and Agencies as it is for their stakeholders and integration partners. The guidance not only helps IT Program Planners and Managers, but also informs and prepares constituents who may be the beneficiaries or otherwise impacted by the investment. The FEA Common Approach extends from and builds on the rapidly-maturing Federal Enterprise Architecture Framework (FEAF) and its associated artifacts and standards, already included to a large degree in the annual Federal Portfolio and Investment Management processes – for example the OMB’s Exhibit 300 (i.e. Business Case justification for IT investments).A very interesting element of this Approach includes the very necessary guidance for actually using an Enterprise Architecture (EA) and/or its collateral – good guidance for any organization charged with maintaining a broad portfolio of IT investments. The associated FEA Reference Models (i.e. the BRM, DRM, TRM, etc.) are very helpful frameworks for organizing, understanding, communicating and standardizing across agencies with respect to vocabularies, architecture patterns and technology standards. Determining when, how and to what level of detail to include these reference models in the typically long-running Federal IT acquisition cycles wasn’t always clear, however, particularly during the first interactions of a Program’s technical and functional leadership with the Mission owners and investment planners. This typically occurs as an agency begins the process of describing its strategy and business case for allocation of new Federal funding, reacting to things like new legislation or policy, real or anticipated mission challenges, or straightforward ROI opportunities (for example the introduction of new technologies that deliver significant cost-savings).The early artifacts (i.e. Resource Allocation Plans, Acquisition Plans, Exhibit 300’s or other Business Case materials, etc.) of the intersection between Mission owners, IT and Program Managers are far easier to understand and discuss, when the overlay of an evolved, actionable Enterprise Architecture (such as the FEA) is applied.  “Actionable” is the key word – too many Public Service entity EA’s (including the FEA) have for too long been used simply as a very highly-abstracted standards reference, duly maintained and nominally-enforced by an Enterprise or System Architect’s office. Refreshing elements of this recent FEA Common Approach include one of the first Federally-documented acknowledgements of the “Solution Architect” (the “Problem-Solving” role). This role collaborates with the Enterprise, System and Business Architecture communities primarily on completing actual “EA Roadmap” documents. These are roadmaps grounded in real cost, technical and functional details that are fully aligned with both contextual expectations (for example the new “Digital Government Strategy” and its required roadmap deliverables - and the rapidly increasing complexities of today’s more portable and transparent IT solutions.  
We also expect some very critical synergies to develop in early IT investment cycles between this new breed of “Federal Enterprise Solution Architect” and the first waves of the newly-formal “Federal IT Program Manager” roles operating under more standardized “critical competency” expectations (including EA), likely already to be seriously influencing the quality annual CPIC (Capital Planning and Investment Control) processes.  Our Oracle Enterprise Strategy Team (EST) and associated Oracle Enterprise Architecture (OEA) practices are already engaged in promoting and leveraging the visibility of Enterprise Architecture as a key contributor to early IT investment validation, and we look forward in particular to seeing the real, citizen-centric benefits of this FEA Common Approach in particular surface across the entire Public Service CPIC domain - Federal, State, Local, Tribal and otherwise. Read more Enterprise Architecture blog posts for additional EA insight!

    Read the article

  • Non-use of persisted data

    - by Dave Ballantyne
    Working at a client site – that in itself is good to say – I ran into a set of circumstances that made me ponder, and appreciate, the optimizer engine a bit more. Working on optimizing a stored procedure, I found a piece of code similar to:

    select BillToAddressID, Rowguid, dbo.udfCleanGuid(rowguid)
    from sales.salesorderheader
    where BillToAddressID = 985

    A lovely scalar UDF was being used – in actuality it was used as part of the WHERE clause, but it is simplified here. Normally I would use an inline table valued function here, but in this case it wasn't a good option. So this seemed like a pretty good case to use a persisted column to improve performance. The supporting index was already defined as

    create index idxBill on sales.salesorderheader(BillToAddressID) include (rowguid)

    and the function code is

    Create Function udfCleanGuid(@GUID uniqueidentifier)
    returns varchar(255)
    with schemabinding
    as
    begin
        Declare @RetStr varchar(255)
        Select @RetStr = CAST(@Guid as varchar(255))
        Select @RetStr = REPLACE(@Retstr, '-', '')
        return @RetStr
    end

    Executing the select statement produced a plan with nothing surprising: a seek to find the data and a compute scalar to execute the UDF. Let's get optimizing and remove the UDF with a persisted column:

    Alter table sales.salesorderheader
    add CleanedGuid as dbo.udfCleanGuid(rowguid) PERSISTED

    A subtle change to the SELECT statement…

    select BillToAddressID, CleanedGuid
    from sales.salesorderheader
    where BillToAddressID = 985

    …and our new optimized plan looks not a lot different from before! We are using persisted data on our table, so where is the lookup to fetch it? It didn't happen – it was recalculated. Looking at the properties of the relevant Compute Scalar would confirm this, but a more graphic example would be shown in the profiler SP:StatementCompleted event. Why did the lookup not happen? Remember the index definition: it has included the original guid to avoid the lookup. The optimizer knows this column will be passed into the UDF, runs through its logic, and decides that recalculating is cheaper than the lookup. That may or may not be the case in actuality; the optimizer has no idea of the real cost of a scalar UDF. IMO the default cost of a scalar UDF should be seen as a lot higher than it is, since they are invariably more expensive. Knowing this, how do we avoid the function call? Dropping the guid from the index is not an option – there may be other code reliant on it. We are left with only one real option: add the persisted column into the index.

    drop index Sales.SalesOrderHeader.idxBill
    go
    create index idxBill on sales.salesorderheader(BillToAddressID) include (rowguid, cleanedguid)

    Now if we repeat the statement

    select BillToAddressID, CleanedGuid
    from sales.salesorderheader
    where BillToAddressID = 985

    we still have a Compute Scalar operator, but this time it wasn't used to recalculate the persisted data. This can be confirmed with profiler again. The takeaway here is: just because you have persisted data, don't automatically assume that it is being used.

    Read the article

  • Child transforms problem when loading 3DS models using assimp

    - by MhdSyrwan
    I'm trying to load a textured 3d model into my scene using assimp model loader. The problem is that child meshes are not situated correctly (they don't have the correct transformations). In brief: all the mTansform matrices are identity matrices, why would that be? I'm using this code to render the model: void recursive_render (const struct aiScene *sc, const struct aiNode* nd, float scale) { unsigned int i; unsigned int n=0, t; aiMatrix4x4 m = nd->mTransformation; m.Scaling(aiVector3D(scale, scale, scale), m); // update transform m.Transpose(); glPushMatrix(); glMultMatrixf((float*)&m); // draw all meshes assigned to this node for (; n < nd->mNumMeshes; ++n) { const struct aiMesh* mesh = scene->mMeshes[nd->mMeshes[n]]; apply_material(sc->mMaterials[mesh->mMaterialIndex]); if (mesh->HasBones()){ printf("model has bones"); abort(); } if(mesh->mNormals == NULL) { glDisable(GL_LIGHTING); } else { glEnable(GL_LIGHTING); } if(mesh->mColors[0] != NULL) { glEnable(GL_COLOR_MATERIAL); } else { glDisable(GL_COLOR_MATERIAL); } for (t = 0; t < mesh->mNumFaces; ++t) { const struct aiFace* face = &mesh->mFaces[t]; GLenum face_mode; switch(face->mNumIndices) { case 1: face_mode = GL_POINTS; break; case 2: face_mode = GL_LINES; break; case 3: face_mode = GL_TRIANGLES; break; default: face_mode = GL_POLYGON; break; } glBegin(face_mode); for(i = 0; i < face->mNumIndices; i++)// go through all vertices in face { int vertexIndex = face->mIndices[i];// get group index for current index if(mesh->mColors[0] != NULL) Color4f(&mesh->mColors[0][vertexIndex]); if(mesh->mNormals != NULL) if(mesh->HasTextureCoords(0))//HasTextureCoords(texture_coordinates_set) { glTexCoord2f(mesh->mTextureCoords[0][vertexIndex].x, 1 - mesh->mTextureCoords[0][vertexIndex].y); //mTextureCoords[channel][vertex] } glNormal3fv(&mesh->mNormals[vertexIndex].x); glVertex3fv(&mesh->mVertices[vertexIndex].x); } glEnd(); } } // draw all children for (n = 0; n < nd->mNumChildren; ++n) { recursive_render(sc, nd->mChildren[n], scale); } glPopMatrix(); } What's the problem in my code ? I've added some code to abort the program if there's any bone in the meshes, but the program doesn't abort, this means : no bones, is that normal? if (mesh->HasBones()){ printf("model has bones"); abort(); } Note: I am using openGL & SFML & assimp

    Read the article

  • Terminal non-responsive on load, can't enter anything until CTRL+C

    - by Silver Light
    Hello! I have an issue with terminal in Ubuntu 10.04. When I launch it, it hangs, like this: I cannot do anything until I press CTRL+C: I cannot remember when this started. What can be wrong? Looks like teminal is loading or processing something each time it loads. How can I diagnose and solve this problem? EDIT: Here are the conents of ~/.bashrc: # ~/.bashrc: executed by bash(1) for non-login shells. # see /usr/share/doc/bash/examples/startup-files (in the package bash-doc) # for examples # If not running interactively, don't do anything [ -z "$PS1" ] && return # don't put duplicate lines in the history. See bash(1) for more options # ... or force ignoredups and ignorespace HISTCONTROL=ignoredups:ignorespace # append to the history file, don't overwrite it shopt -s histappend # for setting history length see HISTSIZE and HISTFILESIZE in bash(1) HISTSIZE=1000 HISTFILESIZE=2000 # check the window size after each command and, if necessary, # update the values of LINES and COLUMNS. shopt -s checkwinsize # make less more friendly for non-text input files, see lesspipe(1) [ -x /usr/bin/lesspipe ] && eval "$(SHELL=/bin/sh lesspipe)" # set variable identifying the chroot you work in (used in the prompt below) if [ -z "$debian_chroot" ] && [ -r /etc/debian_chroot ]; then debian_chroot=$(cat /etc/debian_chroot) fi # set a fancy prompt (non-color, unless we know we "want" color) case "$TERM" in xterm-color) color_prompt=yes;; esac # uncomment for a colored prompt, if the terminal has the capability; turned # off by default to not distract the user: the focus in a terminal window # should be on the output of commands, not on the prompt #force_color_prompt=yes if [ -n "$force_color_prompt" ]; then if [ -x /usr/bin/tput ] && tput setaf 1 >&/dev/null; then # We have color support; assume it's compliant with Ecma-48 # (ISO/IEC-6429). (Lack of such support is extremely rare, and such # a case would tend to support setf rather than setaf.) color_prompt=yes else color_prompt= fi fi if [ "$color_prompt" = yes ]; then PS1='${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$ ' else PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\$ ' fi unset color_prompt force_color_prompt # If this is an xterm set the title to user@host:dir case "$TERM" in xterm*|rxvt*) PS1="\[\e]0;${debian_chroot:+($debian_chroot)}\u@\h: \w\a\]$PS1" ;; *) ;; esac # enable color support of ls and also add handy aliases if [ -x /usr/bin/dircolors ]; then test -r ~/.dircolors && eval "$(dircolors -b ~/.dircolors)" || eval "$(dircolors -b)" alias ls='ls --color=auto' #alias dir='dir --color=auto' #alias vdir='vdir --color=auto' alias grep='grep --color=auto' alias fgrep='fgrep --color=auto' alias egrep='egrep --color=auto' fi # some more ls aliases alias ll='ls -alF' alias la='ls -A' alias l='ls -CF' # Add an "alert" alias for long running commands. Use like so: # sleep 10; alert alias alert='notify-send --urgency=low -i "$([ $? = 0 ] && echo terminal || echo error)" "$(history|tail -n1|sed -e '\''s/^\s*[0-9]\+\s*//;s/[;&|]\s*alert$//'\'')"' # Alias definitions. # You may want to put all your additions into a separate file like # ~/.bash_aliases, instead of adding them here directly. # See /usr/share/doc/bash-doc/examples in the bash-doc package. if [ -f ~/.bash_aliases ]; then . ~/.bash_aliases fi # enable programmable completion features (you don't need to enable # this, if it's already enabled in /etc/bash.bashrc and /etc/profile # sources /etc/bash.bashrc). 
if [ -f /etc/bash_completion ] && ! shopt -oq posix; then . /etc/bash_completion fi # Source .profile if [ -f ~/.profile ]; then . ~/.profile fi Setting -x at the beginning showed me that it tries to repeat this without stopping: +++++++++++++++++++ '[' 'complete -f -X '\''!*.@(pdf|PDF)'\'' acroread gpdf xpdf' '!=' 'complete -f -X '\''!*.@(pdf|PDF)'\'' acroread gpdf xpdf' ']' +++++++++++++++++++ line='complete -f -X '\''!*.@(pdf|PDF)'\'' acroread gpdf xpdf' +++++++++++++++++++ line='complete -f -X '\''!*.@(pdf|PDF)'\'' acroread gpdf xpdf' +++++++++++++++++++ line=' acroread gpdf xpdf' +++++++++++++++++++ list=("${list[@]}" $line) +++++++++++++++++++ read line
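    One thing that stands out in the excerpt itself (an inference, not something the poster confirmed): this ~/.bashrc ends by sourcing ~/.profile, and Ubuntu's default ~/.profile sources ~/.bashrc back, so the two files can keep re-sourcing each other (re-reading /etc/bash_completion on every pass) until interrupted. A quick way to test that theory:

    # comment out, or guard, the block at the end of ~/.bashrc ...
    # if [ -f ~/.profile ]; then
    #     . ~/.profile
    # fi
    # ... then open a new terminal, or trace a fresh shell to watch where it loops:
    bash -x -l -i -c true 2>&1 | head -n 100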

    Read the article

  • MVC Communication Pattern

    - by Kedu
    This is kind of a follow up question to this http://stackoverflow.com/questions/23743285/model-view-controller-and-callbacks, but I wanted to post it separately, because its kind of a different topic. I'm working on a multiplayer cardgame for the Android platform. I split the project into MVC which fits the needs pretty good, but I'm currently stuck because I can't figure out a good way to communicate between the different parts. I have everything setup and working with the controller being a big state machine, which is called over and over from the gameloop, and calls getter methods from the GUI and the android/network part to get the input. The input itself in the GUI and network is set by inputlisteners that set a local variable which I read in the getter method. So far so good, this is working. But my problem is, the controller has to check every input separately,so if I want to add an input I have to check in which states its valid and call the getter method from all these states. This is not good, and lets the code look pretty ugly, makes additions uncomfortable and adds redundance. So what I've got from the question I mentioned above is that some kind of command or event pattern will fit my needs. What I want to do is to create a shared and threadsafe queue in the controller and instead of calling all these getter methods, I just check the queue for new input and proceed it. On the other side, the GUI and network don't have all these getters, but instead create an event or command and send it to the controller through, for example, observer/observable. Now my problem: I can't figure out a way, for these commands/events to fit a common interface (which the queue can store) and still transport different kind of data (button clicks, cards that are played, the player id the command comes from, synchronization data etc.). If I design the communication as command pattern, I have to stick all the information that is needed to execute the command into it when its created, that's impossible because the GUI or network has no knowledge of all the things the controller needs to execute stuff that needs to be done when for example a card is played. I thought about getting this stuff into the command when executing it. But over all the different commands I have, I would need all the information the controller has, and thus give the command a reference to the controller which would make everything in it public, which is real bad design I guess. So, I could try some kind of event pattern. I have to transport data in the event. So, like the command, I would have an interface, which all events have in common, and can be stored in the shared queue. I could create a big enum with all the different events that a are possible, save one of these enums in the actual event, and build a big switch case for the events, to proceed different stuff for different events. The problem here: I have different data for all the events. But I need a common interface, to store the events in a queue. How do I get the specific data, if I can only access the event through the interface? Even if that wouldn't be a problem, I'm creating another big switch case, which looks ugly, and when i want to add a new event, I have to create the event itself, the case, the enum, and the method that's called with the data. I could of course check the event with the enum and cast it to its type, so I can call event type specific methods that give me the data I need, but that looks like bad design too.
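    One pattern that addresses exactly this "shared interface but different payloads" problem is double dispatch (the visitor pattern): each concrete event carries its own data and hands itself to a typed handler, so the queue stays typed to the interface and the controller never casts or switches. A rough Java sketch, with event and handler names invented for illustration:

    interface GameEvent {
        void dispatch(GameEventHandler handler);
    }

    interface GameEventHandler {
        void onCardPlayed(CardPlayedEvent e);
        void onButtonClicked(ButtonClickedEvent e);
    }

    final class CardPlayedEvent implements GameEvent {
        final int playerId;
        final int cardId;
        CardPlayedEvent(int playerId, int cardId) { this.playerId = playerId; this.cardId = cardId; }
        public void dispatch(GameEventHandler handler) { handler.onCardPlayed(this); }
    }

    final class ButtonClickedEvent implements GameEvent {
        final String buttonId;
        ButtonClickedEvent(String buttonId) { this.buttonId = buttonId; }
        public void dispatch(GameEventHandler handler) { handler.onButtonClicked(this); }
    }

    // In the controller's game loop, with a java.util.concurrent.BlockingQueue<GameEvent>:
    // GameEvent event = queue.poll();
    // if (event != null) event.dispatch(controllerHandler);

    Adding a new input then means adding one event class and one handler method; the compiler points out every handler that has to deal with it, instead of a growing switch statement.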

    Read the article

  • Cloud – the forecast is improving

    - by Rob Farley
    There is a lot of discussion about “the cloud”, and how that affects people’s data stories. Today the discussion enters the realm of T-SQL Tuesday, hosted this month by Jorge Segarra. Over the years, companies have invested a lot in making sure that their data is good, and I mean every aspect of it – the quality of it, the security of it, the performance of it, and more. Experts such as those of us at LobsterPot Solutions have helped these companies with this, and continue to work with clients to make sure that data is a strong part of their business, not an oversight. Whether business intelligence systems are being utilised or not, every business needs to be able to rely on its data, and have the confidence in it. Data should be a foundation upon which a business is built. In the past, data had been stored in paper-based systems. Filing cabinets stored vital information. Today, people have server rooms with storage of various kinds, recognising that filing cabinets don’t necessarily scale particularly well. It’s easy to ‘lose’ data in a filing cabinet, when you have people who need to make sure that the sheets of paper are in the right spot, and that you know how things are stored. Databases help solve that problem, but still the idea of a large filing cabinet continues, it just doesn’t involve paper. If something happens to the physical ‘filing cabinet’, then the problems are larger still. Then the data itself is under threat. Many clients have generators in case the power goes out, redundant cables in case the connectivity dies, and spare servers in other buildings just in case they’re required. But still they’re maintaining filing cabinets. You see, people like filing cabinets. There’s something to be said for having your data ‘close’. Even if the data is not in readable form, living as bits on a disk somewhere, the idea that its home is ‘in the building’ is comforting to many people. They simply don’t want to move their data anywhere else. The cloud offers an alternative to this, and the human element is an obstacle. By leveraging the cloud, companies can have someone else look after their filing cabinet. A lot of people really don’t like the idea of this, partly because the administrators of the data, those people who could potentially log in with escalated rights and see more than they should be allowed to, who need to be trusted to respond if there’s a problem, are now a faceless entity in the cloud. But this doesn’t mean that the cloud is bad – this is simply a concern that some people may have. In new functionality that’s on its way, we see other hybrid mechanisms that mean that people can leverage parts of the cloud with less fear. Companies can use cloud storage to hold their backup data, for example, backups that have been encrypted and are therefore not able to be read by anyone (including administrators) who don’t have the right password. Companies can have a database instance that runs locally, but which has its data files in the cloud, complete with Transparent Data Encryption if needed. There can be a higher level of control, making the change easier to accept. Hybrid options allow people who have had fears (potentially very justifiable) to take a new look at the cloud, and to start embracing some of the benefits of the cloud (such as letting someone else take care of storage, high availability, and more) without losing the feeling of the data being close. @rob_farley

    Read the article

  • jQuery Mobile and Google Maps [on hold]

    - by Jack
    I have been trying to get my google maps to display within a page of a mobile app. The map will display for a second, and then disappear. I have read about a jquery bug, but i can't seem to find a way to get this code to work. any help would be greatly appreciated. <script> var geocoder; var currentLocation; var searchResults; var map; var directionsDisplay; var directionsService; function init(){ geocoder = new google.maps.Geocoder(); if (navigator.geolocation){ navigator.geolocation.watchPosition(showLocation, locationError); } else { alert("Geolocation not supported on this device"); return; } }//init function function showLocation(location){//start showlocation currentLocation = new google.maps.LatLng(location.coords.latitude, location.coords.longitude); $('#lat').attr("value", currentLocation.lat()); $('#lng').attr("value", currentLocation.lng()); geocoder = new google.maps.Geocoder(); geocoder.geocode({'latLng': currentLocation}, function(results, status){ if (status == google.maps.GeocoderStatus.OK){ if (results[0]){ var address = results[0].formatted_address; $('#loc').html(results[0].formatted_address); var info = "Latitude: " + location.coords.latitude + " Longitude: " + location.coords.longitude + "<br />"; info += "Location accurate within " + location.coords.accuracy + " meters <br /> Last Update: " + new Date(location.timestamp).toLocaleString(); $('#acc').html(info); $('#address').attr("value", results[0].formatted_address); }else{ alert('No results found'); }//end else //if(!map) initMap(); }else { $('#loc').html('Geocoder failed due to: ' + status); }//end else });//end of function if (!map) initMap(); }//end showlocation function function locationError(error){ switch(error.code) { case error.PERMISSION_DENIED: alert("Geolocation access denied or disabled. To enable geolocation on your iPhone, go to Settings > General> Location Services"); break; case error.POSITION_UNAVAILABLE: alert("Current location not available"); break; case error.TIMEOUT: alert("Timeout"); break; default: alert("unkown error"); break; }//endswitch }//endlocationerror function initMap(){ var mapOptions = { zoom: 14, mapTypeId: google.maps.MapTypeId.ROADMAP, center: currentLocation };//var mapOptions map = new google.maps.Map(document.getElementById('mapDiv'), mapOptions); google.maps.event.trigger(map, 'resize'); var bounds = new google.maps.LatLngBounds(); bounds.extend(currentLocation); map.fitBounds(bounds); //new code //var center; //function calculateCenter(){ //center = map.getCenter(); //} //google.maps.even.addDomListener(map, 'idle', function(){ //calculateCenter(); //}); //google.maps.even.addListenerOnce(map, 'idle', function(){ //google.maps.even.trigger(map,'resize'); //}); //google.maps.event.addDomListener(window, 'resize', function() { //map.setCenter(center); //});//end new code }//end initMap() //------------------------------------------------------------------------------- $(document).on("pageinit", init);
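    A workaround that is often suggested for the jQuery Mobile case, sketched here with an assumed page id (substitute the id of the page that contains mapDiv): once the page is actually shown, re-trigger the map's resize event and re-centre it, much like the commented-out attempts above.

    // assumes the `map` and `currentLocation` globals defined in the script above
    $(document).on("pageshow", "#mapPage", function () {
        if (map) {
            google.maps.event.trigger(map, "resize");
            map.setCenter(currentLocation);
        }
    });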

    Read the article

  • Selenium – Use Data Driven tests to run in multiple browsers and sizes

    - by Aligned
    Originally posted on: http://geekswithblogs.net/Aligned/archive/2013/11/04/selenium-ndash-use-data-driven-tests-to-run-in-multiple.aspxSelenium uses WebDriver (or is it the same? I’m still learning how it is connected) to run Automated UI tests in many different browsers. For example, you can run the same test in Chrome and Firefox and in a smaller sized Chrome browser. The permutations can grow quickly. One way to get them to run in MStest is to create  a test method for each test (ie ChromeDeleteItem_Small, ChromeDeleteItem_Large, FFDeleteItem_Small, FFDeleteItem_Large) that each call  the same base method, passing in the browser and size you’d like. This approach was causing a lot of duplicate code, so I decided to use the data driven approach, common to Coded UI or Unit test methods. 1. Create a class with a test method. 2. Create a csv with two columns: BrowserType, BrowserSize 3. Add rows for each permutation: Chrome, Large | Chrome, Small | Firefox, Large | Firefox, Small | IE, Large | IE, Small | *** 4. Add the csv to the Visual Studio Project. 5. Set the Copy to output directory to Copy always 6. Add the attribute: [DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV", "|DataDirectory|\\TestMatrix.csv", "TestMatrix#csv", DataAccessMethod.Sequential), DeploymentItem("TestMatrix.csv")] 7. Run the test in the test explorer Example:[CodedUITest] public class AllTasksTests : TasksTestBase { [TestMethod] [TestCategory("Tasks")] [DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV", "|DataDirectory|\\TestMatrix.csv", "TestMatrix#csv", DataAccessMethod.Sequential), DeploymentItem("TestMatrix.csv")] public void CreateTask() { this.PrepForDataDrivenTest(); base.CreateTaskTest("New Task"); } } protected void PrepForDataDrivenTest() { var browserType = this.ParseBrowserType(Context.DataRow["BrowserType"].ToString()); var browserSize = this.ParseBrowserSize(Context.DataRow["BrowserSize"].ToString()); this.BrowserType = browserType; this.BrowserSize = browserSize; Trace.WriteLine("browser: " + browserType.ToString()); Trace.WriteLine("browser size: " + browserSize.ToString()); } /// <summary> /// Get the enum value from the string /// </summary> /// <param name="browserType">Chrome, Firefox, or IE</param> /// <returns>The browser type.</returns> private BrowserType ParseBrowserType(string browserType) { return (UITestFramework.BrowserType)Enum.Parse(typeof(UITestFramework.BrowserType), browserType, true); } /// <summary> /// Get the browser size enum value from the string /// </summary> /// <param name="browserSize">Small, Medium, Large</param> /// <returns>the browser size</returns> private BrowserSizeEnum ParseBrowserSize(string browserSize) { return (BrowserSizeEnum)Enum.Parse(typeof(BrowserSizeEnum), browserSize, true); }/// <summary> /// Change the browser to the size based on the enum. 
/// </summary> /// <param name="browserSize">The BrowserSizeEnum value to resize the window to.</param> private void ResizeBrowser(BrowserSizeEnum browserSize) { switch (browserSize) { case BrowserSizeEnum.Large: this.driver.Manage().Window.Maximize(); break; case BrowserSizeEnum.Medium: this.driver.Manage().Window.Size = new Size(800, this.driver.Manage().Window.Size.Height); break; case BrowserSizeEnum.Small: this.driver.Manage().Window.Size = new Size(500, this.driver.Manage().Window.Size.Height); break; default: break; } }/// <summary> /// Browser sizes for automation testing /// </summary> public enum BrowserSizeEnum { /// <summary> /// Large size, Maximized to the desktop /// </summary> Large, /// <summary> /// Similar to tablets /// </summary> Medium, /// <summary> /// Phone sizes... 610px and smaller /// </summary> Small } Hope it helps!
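    For reference, the TestMatrix.csv described in steps 2–3 would look something like this (the values must match the enum names the parse methods expect):

    BrowserType,BrowserSize
    Chrome,Large
    Chrome,Small
    Firefox,Large
    Firefox,Small
    IE,Large
    IE,Small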

    Read the article

  • How to access a MySQL table from a WAMP database using this PHP code? [migrated]

    - by user3909877
    how to access tables from database by using php in wamp server.i have done the following code but its not working for some reason.is there anything to put in 'action=""'.it is not giving any error but displaying the same page.i want to display table from database on any different entry in dropdown menu and pressing search button.. <p class="h2">Quick Search</p> <div class="sb2_opts"> <p> </p> <form method="post" action="" > <p>Enter your source and destination.</p> <p> From:</p> <select name="from"> <option value="Islamabad">Islamabad</option> <option value="Lahore">Lahore</option> <option value="murree">Murree</option> <option value="Muzaffarabad">Muzaffarabad</option> </select> <p> To:</p> <select name="To"> <option value="Islamabad">Islamabad</option> <option value="Lahore">Lahore</option> <option value="murree">Murree</option> <option value="Muzaffarabad">Muzaffarabad</option> </select> <input type="submit" value="search" /> </form> </form> </table> <?php if(isset($_POST['from']) and isset($_POST['To'])) { $from = $_POST['from'] ; $to = $_POST['To'] ; $table = array($from, $to); $con=mysqli_connect("localhost"); $mydb=mysql_select_db("homedb"); if (mysqli_connect_errno()) { echo "Failed to connect to MySQL: " . mysqli_connect_error(); } switch ($table) { case array ("Islamabad", "Lahore") : $result = mysqli_query($con,"SELECT * FROM flights"); echo "</flights>"; //table name is flights break; case array ("Islamabad", "Murree") : $result = mysqli_query($con,"SELECT * FROM `isb to murree`"); echo "</`isb to murree`>"; //table name isb to murree ; break; case array ("Islamabad", "Muzaffarabad") : $result = mysqli_query($con,"SELECT * FROM `isb to muzz`"); echo "</`isb to muzz`>"; break; //..... //...... default: echo "Your choice is nor valid !!"; } } mysqli_close($con); ?>
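    A cleaned-up sketch of the lookup itself, assuming the usual WAMP defaults of user root with an empty password (adjust the credentials; the table names are taken from the question): pick the table from a whitelist keyed on the route, query it with mysqli, and print the rows. Note the form sends "murree" in lower case while the comparisons above use "Murree", so the input is normalised first.

    <?php
    // hedged sketch – credentials and output formatting are assumptions
    $con = mysqli_connect("localhost", "root", "", "homedb");
    if (mysqli_connect_errno()) {
        die("Failed to connect to MySQL: " . mysqli_connect_error());
    }

    if (isset($_POST['from'], $_POST['To'])) {
        // map route => table name; never interpolate user input into SQL directly
        $routes = array(
            'Islamabad|Lahore'       => 'flights',
            'Islamabad|Murree'       => 'isb to murree',
            'Islamabad|Muzaffarabad' => 'isb to muzz',
        );
        $key = ucfirst(strtolower($_POST['from'])) . '|' . ucfirst(strtolower($_POST['To']));
        if (isset($routes[$key])) {
            $result = mysqli_query($con, "SELECT * FROM `" . $routes[$key] . "`");
            echo "<table>";
            while ($row = mysqli_fetch_assoc($result)) {
                echo "<tr>";
                foreach ($row as $value) {
                    echo "<td>" . htmlspecialchars($value) . "</td>";
                }
                echo "</tr>";
            }
            echo "</table>";
        } else {
            echo "Your choice is not valid!";
        }
    }
    mysqli_close($con);
    ?>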

    Read the article

  • DotNetOpenAuth: Message signature was incorrect

    - by Shawn Miller
    I'm getting a "Message signature was incorrect" exception when trying to authenticate with MyOpenID and Yahoo. I'm using pretty much the ASP.NET MVC sample code that came with DotNetOpenAuth 3.4.2 public ActionResult Authenticate(string openid) { var openIdRelyingParty = new OpenIdRelyingParty(); var authenticationResponse = openIdRelyingParty.GetResponse(); if (authenticationResponse == null) { // Stage 2: User submitting identifier Identifier identifier; if (Identifier.TryParse(openid, out identifier)) { var realm = new Realm(Request.Url.Root() + "openid"); var authenticationRequest = openIdRelyingParty.CreateRequest(openid, realm); authenticationRequest.RedirectToProvider(); } else { return RedirectToAction("login", "home"); } } else { // Stage 3: OpenID provider sending assertion response switch (authenticationResponse.Status) { case AuthenticationStatus.Authenticated: { // TODO } case AuthenticationStatus.Failed: { throw authenticationResponse.Exception; } } } return new EmptyResult(); } Working fine with Google, AOL and others. However, Yahoo and MyOpenID fall into the AuthenticationStatus.Failed case with the following exception: DotNetOpenAuth.Messaging.Bindings.InvalidSignatureException: Message signature was incorrect. at DotNetOpenAuth.OpenId.ChannelElements.SigningBindingElement.ProcessIncomingMessage(IProtocolMessage message) in c:\Users\andarno\git\dotnetopenid\src\DotNetOpenAuth\OpenId\ChannelElements\SigningBindingElement.cs:line 139 at DotNetOpenAuth.Messaging.Channel.ProcessIncomingMessage(IProtocolMessage message) in c:\Users\andarno\git\dotnetopenid\src\DotNetOpenAuth\Messaging\Channel.cs:line 992 at DotNetOpenAuth.OpenId.ChannelElements.OpenIdChannel.ProcessIncomingMessage(IProtocolMessage message) in c:\Users\andarno\git\dotnetopenid\src\DotNetOpenAuth\OpenId\ChannelElements\OpenIdChannel.cs:line 172 at DotNetOpenAuth.Messaging.Channel.ReadFromRequest(HttpRequestInfo httpRequest) in c:\Users\andarno\git\dotnetopenid\src\DotNetOpenAuth\Messaging\Channel.cs:line 386 at DotNetOpenAuth.OpenId.RelyingParty.OpenIdRelyingParty.GetResponse(HttpRequestInfo httpRequestInfo) in c:\Users\andarno\git\dotnetopenid\src\DotNetOpenAuth\OpenId\RelyingParty\OpenIdRelyingParty.cs:line 540 Appears that others are having the same problem: http://trac.dotnetopenauth.net:8000/ticket/172 Does anyone have a workaround?

    Read the article

  • APK Expansion Files - Application Licensing - Developer account - NOT_LICENSED response

    - by mUncescu
    I am trying to use the APK Expansion Files extension for Android. I have uploaded the APK to the server along with the extension files. If the application was previously published i get a response from the server saying NOT_LICENSED: The code I use is: APKExpansionPolicy aep = new APKExpansionPolicy(mContext, new AESObfuscator(getSALT(), mContext.getPackageName(), deviceId)); aep.resetPolicy(); LicenseChecker checker = new LicenseChecker(mContext, aep, getPublicKey(); checker.checkAccess(new LicenseCheckerCallback() { @Override public void allow(int reason) { @Override public void dontAllow(int reason) { try { switch (reason) { case Policy.NOT_LICENSED: mNotification.onDownloadStateChanged(IDownloaderClient.STATE_FAILED_UNLICENSED); break; case Policy.RETRY: mNotification.onDownloadStateChanged(IDownloaderClient.STATE_FAILED_FETCHING_URL); break; } } finally { setServiceRunning(false); } } @Override public void applicationError(int errorCode) { try { mNotification.onDownloadStateChanged(IDownloaderClient.STATE_FAILED_FETCHING_URL); } finally { setServiceRunning(false); } } }); So if the application wasn't previously published the Allow method is called. If the application was previously published and now it isn't the dontAllow method is called. I have tried: http://developer.android.com/guide/google/play/licensing/setting-up.html#test-response Here it says that if you use a developer or test account on your test device you can set a specific response, I use LICENSED as response and still get NOT_LINCESED. Resetting the phone, clearing google play store cache, application data. Changing the versioncode number in different combinations still doesn't work. Edit: In case someone else was facing this problem I received an mail from the google support team We are aware that newly created accounts for testing in-app billing and Google licensing server (LVL) return errors, and are working on resolving this problem. Please stay tuned. In the meantime, you can use any accounts created prior to August 1st, 2012, for testing. So it seems to be a problem with their server, if I use the main developer thread everything works fine.

    Read the article

  • JSF with Enum 'Validation Error: Value is not valid'

    - by Shamik
    I have an enum whose code is like this - public enum COSOptionType { NOTAPPLICABLE, OPTIONAL, MANDATORY; private String[] label = { "Not Applicable", "Optional", "Mandatory"}; @Override public String toString() { return label[this.ordinal()]; } public static COSOptionType getCOSOption(String value) { int ivalue = Integer.parseInt(value); switch(ivalue) { case 0: return NOTAPPLICABLE; case 1: return OPTIONAL; case 2: return MANDATORY; default: throw new RuntimeException("Should not get this far ever!"); } } } I have the converter to convert the enum type public class COSEnumConverter implements Converter { public Object getAsObject(FacesContext context, UIComponent comp, String value) { return COSOptionType.getCOSOption(value); } public String getAsString(FacesContext context, UIComponent comp, Object obj) { if (obj instanceof String) { return (String) obj; } COSOptionType type = (COSOptionType) obj; int index = type.ordinal(); return ""+index; } } The view looks like this <h:selectOneMenu value="#{controller.type}" id="smoking"> <f:selectItems value="#{jnyController.choices}" /> </h:selectOneMenu> Here is the code for create choices private List<SelectItem> createChoicies() { List<SelectItem> list = new ArrayList<SelectItem>(); for (COSOptionType cos : COSOptionType.values()) { SelectItem item = new SelectItem(); item.setLabel(cos.toString()); item.setValue("" + cos.ordinal()); list.add(item); } return list; } I do not understand why this would throw "validation error" all the time ? I can debug and see that the converter is working fine. NOTE: I am using JSF 1.1
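    The usual cause of "Validation Error: Value is not valid" is that the object returned by the converter's getAsObject is compared (via equals) against the SelectItem values – here the items hold Strings like "1" while the converter returns a COSOptionType, so nothing ever matches. A minimal sketch of one way to line them up is to store the enum constants themselves as the item values and let the converter render them:

    private List<SelectItem> createChoices() {
        List<SelectItem> list = new ArrayList<SelectItem>();
        for (COSOptionType cos : COSOptionType.values()) {
            // store the enum itself; the converter turns it into its ordinal string for
            // rendering and back again on submit, so the submitted value equals an item value
            list.add(new SelectItem(cos, cos.toString()));
        }
        return list;
    }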

    Read the article

  • Salesforce - Update/Upsert custom object entry

    - by Phill Pafford
    UPDATE: It's working as expected just needed to pass the correct Id, DUH!~ I have a custom object in salesforce, kind of like the comments section on a case for example. When you add a new comment it has a date/time stamp for that entry, I wanted to update the previous case comment date/time stamp when a new case comment is created. I wanted to do an UPDATE like this: $updateFields = array( 'Id'=>$comment_id, // This is the Id for each comment 'End_Date__c'=>$record_last_modified_date ); function sfUpdateLastCommentDate($sfConnection, $updateFields) { try { $sObjectCustom = new SObject(); $sObjectCustom->type = 'Case_Custom__c'; $sObjectCustom->fields = $updateFields; $createResponse = $sfConnection->update(array($sObjectCustom)); } catch(Exception $e) { $error_msg = SALESFORCE_ERROR." \n"; $error_msg .= $e->faultstring; $error_msg .= $sfConnection->getLastRequest(); $error_msg .= SALESFORCE_MESSAGE_BUFFER_NEWLINE; // Send error message mail(ERROR_TO_EMAIL, ERROR_EMAIL_SUBJECT, $error_msg, ERROR_EMAIL_HEADER_WITH_CC); exit; } } I've also tried the UPSERT but I get the error: Missing argument 2 for SforcePartnerClient::upsert() Any help would be great
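    For the record, the "Missing argument 2" error matches the upsert signature in the PHP Toolkit: the first argument is the external ID field name and the second is the array of SObjects – something along these lines, where 'External_Id__c' is only a placeholder for your own external ID field:

    // hedged sketch – substitute your external ID field name
    $upsertResponse = $sfConnection->upsert('External_Id__c', array($sObjectCustom));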

    Read the article

  • .htaccess - RewriteRules working, but browser address bar displaying full (unfriendly) URL

    - by axol
    Hey there,

    Haven't been able to find a solution to this around the net or these forums - apologies if I've missed something! My .htaccess RewriteRules are working well - I have search-engine- and user-friendly links in my web pages, and unfriendly database URLs running hidden in the background. Except when I added a RewriteRule to add "www." to the front of URLs if the user didn't enter it - to ensure only one form appears in search engines. Here's what is now happening, and I can't figure out why!

    My friendly URL structure for content is like this, and the database query string uses the first "importantword": www.example.com/importantword-nonimportantword/

    .htaccess snippet:

        Options +FollowSymLinks
        Options -Indexes
        RewriteEngine on
        RewriteOptions MaxRedirects=10
        RewriteBase /
        RewriteRule ^/$ index.php [L]
        RewriteRule ^(.*)-(.*)/overview/$ detail.php?categoryID=$1 [L]
        RewriteCond %{HTTP_HOST} !^www.example.com$
        RewriteRule ^(.*)$ http://www.example.com/$1 [L]

    What's happening since I added the last two lines:

    CASE 1: user types (or clicks) www.example.com/honda-vehicles/overview/ - works correctly - they are taken to the correct page and the browser URL bar says: www.example.com/honda-vehicles/overview/

    CASE 2: user types example.com - works correctly - they are taken to www.example.com and the browser URL bar says: www.example.com

    CASE 3: user types (or clicks) example.com/honda-vehicles/overview/, i.e. without the "www" prefix - does NOT work correctly - they are taken to the right page, but the browser URL bar displays the unfriendly URL: www.example.com/detail.php?categoryID=honda

    I figure there's some issue with the order of the RewriteRules, but it's doing my head in trying to logically step through it and figure it out! Any assistance at all or pointers would be most appreciated!

    Read the article

  • iPhone Device 3.1 SDK Breaks vertical alignment of UITableViewCellStyleValue1 textLabel

    - by user171089
    Can anyone provide an explanation for the following phenomenon? As of the iPhone Device 3.1 SDK, I've found that if a UITableViewCell is of style UITableViewCellStyleValue1 and its detailTextLabel.text is unassigned, then the textLabel does not display in the center of the cell as would be expected. One notable caveat is that this only happens for me when I'm testing on the device - the iPhone Simulator 3.1 SDK displays the cells correctly. Also, this is not a problem when using the iPhone Device 3.0 SDK. Below is a simple UITableViewController subclass implementation that demonstrates the problem.

        @implementation BuggyTableViewController

        #pragma mark Table view methods

        // Customize the number of rows in the table view.
        - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
            return 3;
        }

        // Customize the appearance of table view cells.
        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            static NSString *CellIdentifier = @"Cell";

            UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier];
            if (cell == nil) {
                cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleValue1
                                               reuseIdentifier:CellIdentifier] autorelease];
            }

            switch (indexPath.row) {
                case 0:
                    cell.textLabel.text = @"detailTextLabel.text unassigned";
                    break;
                case 1:
                    cell.textLabel.text = @"detailTextLabel.text = @\"\"";
                    cell.detailTextLabel.text = @"";
                    break;
                case 2:
                    cell.textLabel.text = @"detailTextLabel.text = @\"A\"";
                    cell.detailTextLabel.text = @"A";
                    break;
                default:
                    break;
            }

            return cell;
        }

        @end

    Read the article

  • Determining the maximum stack depth

    - by Joa Ebert
    Imagine I have a stack-based toy language that comes with the operations Push, Pop, Jump and If. I have a program, and its input is the toy language. For instance I get the sequence:

        Push 1
        Push 1
        Pop
        Pop

    In that case the maximum stack depth would be 2. A more complicated example would use branches:

        Push 1
        Push true
        If .success
        Pop
        Jump .continue
        .success:
        Push 1
        Push 1
        Pop
        Pop
        Pop
        .continue:

    In this case the maximum stack depth would be 3. However, it is not possible to get the maximum stack depth by walking top to bottom as written in this case, since that would actually result in a stack-underflow error. CFGs to the rescue: you can build a graph and walk every possible path of the basic blocks you have. However, since the number of paths can grow quickly, for n vertices you get (n-1)! possible paths. My current approach is to simplify the graph as much as possible and to have fewer possible paths. This works, but I would consider it ugly. Is there a better (read: faster) way to attack this problem? I am fine if the algorithm produces a stack depth that is not optimal. If the correct stack size is m, then my only constraint is that the result n satisfies n >= m. Is there maybe a greedy algorithm available that would produce a good result here?
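
    The usual way to avoid enumerating whole paths is to propagate the entry depth of each instruction along the control flow, much like a bytecode verifier computes its max-stack value: each instruction is processed at most once per distinct entry depth, so well-formed programs are handled in roughly linear time and the result is exact rather than an overestimate. Below is a minimal sketch of that idea in Java; the instruction model, the assumption that If consumes its condition, and all names are illustrative rather than taken from the question.

        import java.util.ArrayDeque;
        import java.util.Deque;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        final class MaxStackDepth {

            enum Op { PUSH, POP, JUMP, IF }

            static final class Insn {
                final Op op;
                final int target;                       // branch target index for JUMP / IF, else -1
                Insn(Op op, int target) { this.op = op; this.target = target; }
                Insn(Op op) { this(op, -1); }
            }

            static int maxDepth(List<Insn> code) {
                Deque<int[]> work = new ArrayDeque<>();       // pending {pc, entry depth} pairs
                Map<Integer, Integer> seen = new HashMap<>(); // deepest entry depth seen per pc
                work.push(new int[] { 0, 0 });
                int max = 0;

                while (!work.isEmpty()) {
                    int[] state = work.pop();
                    int pc = state[0], depth = state[1];
                    while (pc < code.size()) {
                        Integer prev = seen.get(pc);
                        if (prev != null && prev >= depth) break;    // already explored at least this deep
                        seen.put(pc, depth);

                        Insn insn = code.get(pc);
                        switch (insn.op) {
                            case PUSH:
                                depth++;
                                max = Math.max(max, depth);
                                pc++;
                                break;
                            case POP:
                                if (--depth < 0) throw new IllegalStateException("stack underflow at " + pc);
                                pc++;
                                break;
                            case JUMP:
                                pc = insn.target;
                                break;
                            case IF:
                                if (--depth < 0) throw new IllegalStateException("stack underflow at " + pc);
                                work.push(new int[] { insn.target, depth });  // explore the taken branch later
                                pc++;                                         // fall through to the not-taken branch
                                break;
                        }
                    }
                }
                return max;
            }
        }

    On the branching example above this walk reports 3: the fall-through arm is followed first, the taken arm is parked on the worklist with its entry depth of 1, and each instruction is still visited only once.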

    Read the article

  • Using the Salesforce PHP API to generate a User Profile Report

    - by Phill Pafford
    Hi All,

    Looking to do a security audit of all user permissions. I think I can use the Salesforce PHPToolkit 11 API to generate the report, but I'm new to Salesforce and a little confused about where to start. In Salesforce Setup, under Administration Setup -> Manage Users -> Profiles -> Profile Names, if you click on each user name you can see the permissions set and the actions the user is allowed to perform. I wanted a way to generate an Excel report for all users with all the permissions for each user. Example:

        User Name | Can view Case | Can edit case | Can delete case | etc...
        phill     | yes           | no            | no              | x...

    ...and so on. I see that in Salesforce I can run a high-level report on the Profile, but I need to drill down for each user. Has anyone ever done this type of reporting before? Any help on this would be great.

    Thanks in advance,
    --Phill

    Read the article

  • Reinforcement learning toy project

    - by Betamoo
    My toy project to learn & apply Reinforcement Learning is:
    - An agent tries to reach a goal state "safely" & "quickly"...
    - But there are projectiles and rockets that are launched at the agent along the way.
    - The agent can determine rocket positions - with some noise - only if they are "near".
    - The agent then must learn to avoid crashing into these rockets.
    - The agent has fuel - rechargeable with time - which is consumed by the agent's motion.
    - Continuous actions: accelerating forward, turning with an angle.

    I need some hints and names of RL algorithms that suit this case:
    - I think it is a POMDP, but can I model it as an MDP and just ignore the noise?
    - In the POMDP case, what is the recommended way of evaluating the probabilities?
    - Which is better to use in this case: value functions or policy iteration?
    - Can I use a neural network to model the environment dynamics instead of using explicit equations?
    - If yes, is there a specific type/model of neural network to be recommended?
    - I think actions must be discretized, right?

    I know it will take time and effort to learn such a topic, but I am eager to. You may answer some of the questions if you cannot answer all of them... Thanks
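
    To make the "value functions" and "discretized actions" parts of the question concrete, here is a minimal tabular Q-learning sketch in Java. It assumes the state and the two continuous controls (acceleration, turn angle) have already been discretized into small index sets and that the state is fully observable, which is exactly the simplification a POMDP-aware method would avoid; the Environment interface and every name in it are illustrative, not part of the question.

        import java.util.Random;

        final class QLearningSketch {

            /** Purely illustrative environment contract over discretized states/actions. */
            interface Environment {
                int reset();                  // returns the initial (discretized) state
                double step(int action);      // applies the action, returns the reward
                int state();                  // current (discretized) state
                boolean done();               // goal reached, crashed, or out of fuel
            }

            static double[][] train(Environment env, int numStates, int numActions, int episodes) {
                double alpha = 0.1, gamma = 0.95, epsilon = 0.1;   // step size, discount, exploration rate
                double[][] q = new double[numStates][numActions];
                Random rng = new Random(0);

                for (int ep = 0; ep < episodes; ep++) {
                    int s = env.reset();
                    while (!env.done()) {
                        // epsilon-greedy action selection
                        int a = rng.nextDouble() < epsilon ? rng.nextInt(numActions) : argMax(q[s]);
                        double r = env.step(a);
                        int s2 = env.state();
                        // one-step Q-learning update
                        q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a]);
                        s = s2;
                    }
                }
                return q;
            }

            static int argMax(double[] v) {
                int best = 0;
                for (int i = 1; i < v.length; i++) if (v[i] > v[best]) best = i;
                return best;
            }

            static double max(double[] v) { return v[argMax(v)]; }
        }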

    Read the article

  • AccessViolationException from a combo: Attempted to read or write protected memory

    - by Sparky
    Users are occasionally getting the above error when using our application (VB.NET, WinForms, using v2 of the framework). I'm not able to reproduce it. The call stack is as follows:

        System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
           at System.Windows.Forms.UnsafeNativeMethods.CallWindowProc(IntPtr wndProc, IntPtr hWnd, Int32 msg, IntPtr wParam, IntPtr lParam)
           at System.Windows.Forms.NativeWindow.DefWndProc(Message& m)
           at System.Windows.Forms.Control.DefWndProc(Message& m)
           at System.Windows.Forms.Control.WndProc(Message& m)
           at System.Windows.Forms.ComboBox.WndProc(Message& m)
           at ControlEx.AutoCompleteCombo.WndProc(Message& m)
           at System.Windows.Forms.Control.ControlNativeWindow.OnMessage(Message& m)
           at System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m)
           at System.Windows.Forms.NativeWindow.Callback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)

    The code for ControlEx.AutoCompleteCombo.WndProc is as follows:

        Protected Overrides Sub WndProc(ByRef m As System.Windows.Forms.Message)
            Try
                If Not m_fReadOnly Then
                    MyBase.WndProc(m)
                Else
                    Select Case m.Msg
                        Case WM_LBUTTONDOWN, WM_LBUTTONDBLCLK
                            ' do nothing
                        Case Else
                            MyBase.WndProc(m)
                    End Select
                End If
            Catch ex As OutOfMemoryException
                Throw New OutOfMemoryException("Exception during WndProc for combo " & Me.Name, ex)
            End Try
        End Sub

    The error handling was added so we can determine which combo causes the problem when we get an OutOfMemoryException. Any clues as to what causes this would be much appreciated! :-)

    Read the article

  • SQLCMD.EXE generates ugly report. How to format it?

    - by Juri Bogdanov
    I made a batch file to run a SQL query like this:

        use [AxDWH_Central_Reporting]
        GO
        EXEC sp_spaceused @updateusage = N'TRUE'
        GO

    It displays two tables, but generates an ugly report with unneeded 'P' characters scattered through it... see below:

        Changed database context to 'AxDWH_Central_Reporting'.
        database_name Pdatabase_size Punallocated space
        --------------------------------------------------------------------------------------------------------------------------------P------------------P------------------
        AxDWH_Central_Reporting P10485.69 MB P7436.85 MB

        reserved Pdata Pindex_size Punused
        ------------------P------------------P------------------P------------------
        3121176 KB P3111728 KB P7744 KB P1704 KB

    I also tried to generate one table from this procedure with the following query:

        declare @dbname sysname,
                @dbsize bigint,
                @logsize bigint,
                @reservedpages bigint

        select @reservedpages = sum(a.total_pages)
        from sys.partitions p
        join sys.allocation_units a on p.partition_id = a.container_id
        left join sys.internal_tables it on p.object_id = it.object_id

        select @dbsize  = sum(convert(bigint, case when status & 64 = 0  then size else 0 end)),
               @logsize = sum(convert(bigint, case when status & 64 <> 0 then size else 0 end))
        from dbo.sysfiles

        select 'database name' = db_name(),
               'database size' = ltrim(str((convert(dec(15,2), @dbsize) + convert(dec(15,2), @logsize)) * 8192 / 1048576, 15, 2) + ' MB'),
               'unallocated space' = ltrim(str((case when @dbsize >= @reservedpages
                                                     then (convert(dec(15,2), @dbsize) - convert(dec(15,2), @reservedpages)) * 8192 / 1048576
                                                     else 0 end), 15, 2) + ' MB')

    But I got a similarly ugly report:

        database name Pdatabase size Punallocated space
        --------------------------------------------------------------------------------------------------------------------------------P------------------P------------------
        master P5.75 MB P1.52 MB

        (1 rows affected)

    Is it possible to change the layout formatting of the report, to make it more readable?

    Read the article
