Search Results

Search found 12325 results on 493 pages for 'remote execution'.


  • Remote stream multiple files in SOLR

    - by Mark
    I want to use SOLR's remote-streaming facility to extract and index the content of files. This works fine if I pass stream.file=xxx as a parameter to an HTTP GET. However, I have a lot of these, and want to batch them up (i.e. not have to issue a GET per file). Is there a way I can do this in SOLR? e.g. I'd like to be able to POST some XML like this: <add> <doc stream_file="filename"> <field name="id">123</field> </doc> <doc>...
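
    For reference, a minimal Python sketch of the per-file remote-streaming call described above (one HTTP request per file, which is the overhead the question is trying to avoid, here batched in a loop with a single commit at the end). The Solr URL, core name, and id field are assumptions, not part of the original question.

    # Sketch: index a batch of files via Solr remote streaming, one request per file.
    # Assumes the ExtractingRequestHandler is enabled and the paths are readable by
    # the Solr server itself (remote streaming reads the file server-side).
    import requests

    SOLR_EXTRACT_URL = "http://localhost:8983/solr/mycore/update/extract"  # assumed URL

    def index_files(paths):
        for doc_id, path in enumerate(paths, start=1):
            params = {
                "stream.file": path,    # server-side path, as in the GET example above
                "literal.id": doc_id,   # assumed unique-key field
                "commit": "false",      # commit once at the end instead of per file
            }
            resp = requests.get(SOLR_EXTRACT_URL, params=params, timeout=30)
            resp.raise_for_status()
        # single commit after the whole batch
        requests.get(SOLR_EXTRACT_URL, params={"commit": "true"}, timeout=30).raise_for_status()

    if __name__ == "__main__":
        index_files(["/data/docs/a.pdf", "/data/docs/b.pdf"])  # hypothetical paths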

    Read the article

  • Remote Seam Persistence

    Hi. I have a button in a .xhtml file which calls a JavaScript function, which in turn calls a Java function remotely (in a JBoss Seam environment). That Java function has an entityManager.persist(object). Do you know why this line of code doesn't commit to the DB? It says that a transaction hasn't started. I suppose that in a remote context I don't have a transaction running, because if I put an action on that button which calls the same Java function instead of going through JavaScript, it works fine; entityManager persists the object and I can see it in the DB. Does anyone have any ideas how I could actually persist the object when calling the Java function from JavaScript? (I have to use JavaScript because I need the callback function.)

    Read the article

  • Open Folder within ClearCase Remote Client using Windows Explorer

    - by sammy
    Is there a way to open the folder location of a file from within CCRC? While I know I can open/copy directly within CCRC, it is often useful to work with the file directly in Windows Explorer. I am looking for something like "open file location" or "open in Windows Explorer". The folder within CCRC does not appear to allow opening it directly, as double-clicking just expands the tree listing. The path is listed/copyable within the "ClearCase Details" tab, but I am trying to take my laziness to a whole new level by being able to open the folder with a single click. Any ideas whether this feature is available and where I can find it? Thanks. Info: Rational ClearCase Remote Client 7.1.1, Windows 7

    Read the article

  • form_for [@parent,@son],:remote=>true not asking for JS

    - by Cibernox
    Hi. I have a plain old form. That form is used to create new objects of a nested model.

    #restaurant.rb
    has_many :courses

    #courses.rb
    belongs_to :restaurant

    #routes.rb
    resources :restaurants do
      resources :courses
    end

    In my views (in haml), I have this code:

    %li.course{'data-random'=>random}
      = form_for([restaurant,course], :remote=>true) do |f|
        .name= f.text_field :name, :placeholder=>'Name here'
        .cat= f.hidden_field :category
        .price= f.text_field :price,:placeholder=>'Price here'
        .save
          = hidden_field_tag :random,random
          = f.submit "Save"

    I expected that form to be answered by the create action of courses_controller with JS (create.js.erb), but it is submitted like a normal form and answered with HTML. What am I doing wrong? This problem is similar to this one, but the only answer doesn't make sense to me. Thanks

    Read the article

  • Powershell - remote folder availability while counting files

    - by ziklop
    I'm trying to make a PowerShell script that reports whether there's a file older than x minutes in a remote folder. I have this:

    $strfolder = 'folder1 ..................'
    $pocet = (Get-ChildItem \\server1\edi1\folder1\*.*) |
        Where-Object {($_.LastWriteTime -lt (Get-Date).AddDays(-0).AddHours(-0).AddMinutes(-20))} |
        Measure-Object
    if ($pocet.count -eq 0) {Write-Host $strfolder "OK" -foreground Green}
    else {Write-Host $strfolder "ERROR" -foreground Red}

    But there's one huge problem. The folder is often unavailable to me because of the high load, and I found out that when there is no connection it doesn't report an error but continues with zero in $pocet.count. That means it reports everything is OK when the folder is unavailable. I was thinking about using if(Test-Path ..), but what if the folder becomes unavailable just after passing Test-Path? Does anyone have a solution please? Thank you in advance
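
    The underlying problem (an unreachable share silently looking like "zero old files") is not PowerShell-specific. Purely as an illustration, a Python sketch of the same check that treats an unreadable folder as an error rather than as an empty result; the UNC path and 20-minute threshold mirror the script above, everything else is an assumption.

    # Sketch: report ERROR both when stale files exist and when the share is unreachable,
    # instead of treating "cannot list the folder" as "no stale files".
    import os
    import time

    FOLDER = r"\\server1\edi1\folder1"   # same share as the PowerShell script above
    MAX_AGE_SECONDS = 20 * 60            # 20 minutes, as in the original check

    def count_stale_files(folder, max_age):
        cutoff = time.time() - max_age
        stale = 0
        for entry in os.scandir(folder):          # raises OSError if the share is unreachable
            if entry.is_file() and entry.stat().st_mtime < cutoff:
                stale += 1
        return stale

    if __name__ == "__main__":
        try:
            stale = count_stale_files(FOLDER, MAX_AGE_SECONDS)
        except OSError as exc:
            print(f"{FOLDER} ERROR: folder unavailable ({exc})")
        else:
            print(f"{FOLDER} {'OK' if stale == 0 else 'ERROR: stale files found'}")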

    Read the article

  • Git: How do I rewind the Master branch on the remote origin

    - by user277260
    I made 5 commits to the Master branch while bug hunting on a private project and pushed them to the remote origin (my own private VPS). Then I saw that commits 4 and 5 were going to cause trouble elsewhere and I needed to undo them, so I checked out commit 3 again, made a new branch "Dev" from that point, and did a few more commits fixing the issue properly. Then I did git reset --hard HEAD~2 on Master to pull it back to the point where I branched Dev. Then I did git merge to fast-forward Master to the end of the Dev branch. So now I have a local repository, with Dev and Master both pointing to the same, up-to-date version of the project with the latest bug fix. The problem is, when I try to push the project now to the origin, it fails and gives me an error message: ! [rejected] master -> master (non-fast forward) error: failed to push some refs to 'myserver...myproject.git' What have I done wrong, and how do I fix it? Thanks

    Read the article

  • Drupal install on remote mysql

    - by user1448660
    I am trying to install Drupal with a remote MySQL server. I have created the user in MySQL and granted the privileges. I am able to connect through the command line from my web server like this: "mysql -u xxxx -h 10.xxx.yy.zz3 -p". But when I try to install Drupal I get "SQLSTATE[28000] [1045] Access denied for user 'xxxx'@'localhost'". I have granted the privileges for "xxxx"@"10.xxx.yy.zz3", but Drupal appends localhost instead of the IP to the user name. I have changed settings.php to the MySQL server IP. What am I missing?

    Read the article

  • Run a remote python script from ASP.Net

    - by Jaelebi
    I have a python script on a Linux server that I can SSH into, and I want to run the script on the Linux server (passing it parameters entered by the user) and get the output on an ASP.NET webpage running on IIS. How would I be able to do that? Would it be easier if I were running a WAMP server? Edit: The servers are in the same internal intranet.
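
    Whatever the web framework, the usual pattern is "SSH in, run the script with the user's parameters, capture stdout". A hedged Python sketch of that pattern using paramiko is below; the host, credentials and script path are placeholders, and the same idea maps onto any SSH client library on the .NET side.

    # Sketch: run a script on the remote Linux box over SSH and capture its output.
    # Requires: pip install paramiko. Host, user, key path and script path are placeholders.
    import shlex
    import paramiko

    def run_remote(host, user, key_path, script, args):
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # acceptable on a trusted intranet host
        client.connect(host, username=user, key_filename=key_path)
        try:
            # quote user-supplied arguments so they cannot inject extra shell commands
            cmd = "python {} {}".format(shlex.quote(script), " ".join(shlex.quote(a) for a in args))
            stdin, stdout, stderr = client.exec_command(cmd)
            return stdout.read().decode(), stderr.read().decode()
        finally:
            client.close()

    if __name__ == "__main__":
        out, err = run_remote("linux-box", "deploy", "/path/to/key", "/opt/scripts/report.py", ["2024-01-01"])
        print(out or err)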

    Read the article

  • Uploading to a remote server periodically?

    - by user1048138
    I have been working on an app that takes screenshots, kinda like http://puush.me/. However, I would like to be able to upload the screenshots to a remote server. What protocols can I use to do so? It needs to be cross-platform and secure. I know that SSH, SFTP and FTP are options; however, they all require logins that I don't want to provide to the end user. Nor do I want to sign a key for them, as it would still allow their machines to remotely log in.
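
    One common way to avoid handing out SSH/FTP logins is to put a small HTTPS upload endpoint in front of the server and give each client only a revocable API token. A hedged Python sketch of the client side is below; the endpoint URL, header and response shape are assumptions about a service you would write yourself.

    # Sketch: upload a screenshot over HTTPS with a revocable API token instead of an SSH/FTP login.
    import requests

    UPLOAD_URL = "https://example.com/api/screenshots"   # hypothetical endpoint
    API_TOKEN = "per-user-token-issued-by-the-server"    # revocable, unlike an account password

    def upload_screenshot(path):
        with open(path, "rb") as fh:
            resp = requests.post(
                UPLOAD_URL,
                headers={"Authorization": f"Bearer {API_TOKEN}"},
                files={"file": fh},          # multipart/form-data upload
                timeout=30,
            )
        resp.raise_for_status()
        return resp.json().get("url")        # assuming the server returns the public URL

    if __name__ == "__main__":
        print(upload_screenshot("screenshot.png"))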

    Read the article

  • Help to argue why to develop software on a physical computer rather than via a remote desktop

    - by s5804
    Remote desktops are great and many times a blessing and cost effective (instead of leasing expensive cables). I am not arguing against remote desktops, just that if one has the alternative of using either a remote desktop or a physical computer, I would choose the latter. Also note that I am not arguing for or against remote work practices. But in my case I am required to be physically present in the office when developing software. Background: I work in a company whose main business is not to develop software. Therefore the company IT policies are mainly focused on security and on efficiently deploying/maintaining thousands of computers for users. Further, the typical employee runs typical Office applications, like a word processor. Because safety/stability is such a big priority, every non-production system/application shall be deployed into a physically different network, called the test network. Software development of course also belongs in the test network. To access the test network the company has created a standard policy, which dictates that access to the test network shall go only via a remote desktop client. Practically, from one's production computer one would open up a remote desktop client to a virtual computer located in the test network. On the virtual computer's remote desktop one would be able to access/run/install all development tools, like the Eclipse IDE. Another solution would be to have a dedicated physical computer, which is physically connected only to the test network. Both solutions are available in the company. I have tested both approaches and found running Eclipse IDE, SQL Developer, etc. in the remote desktop client to be sluggish (keyboard strokes are delayed), commands like alt-tab take me out of the remote client, enjoying... Further, screen resolution and colors are different, just to mention a few issues. So there is nothing technically wrong with the remote client, it is just not optimal and frankly de-motivating. Now with the new policies put in place, plans are to remove the physical computers connected to the test network. I am looking for help to argue why software developers shall have a dedicated physical software development computer, to be productive and cost effective. Remember that we are physically in the office. Further, one can notice that we are talking about approx. 50 computers out of 2000 employees. Therefore the extra budget is relatively small. This is more about policy than cost. Please note that there are lots of similar setups in other companies that work great due to perfectly tuned systems. However, in my case it is sluggish, and it would cost more money to troubleshoot the performance and fine-tune it than to have a few physical computers. As a business case we have argued that productivity will go down by 25%, however it's my feeling that the reality is probably closer to 50%. This business case isn't really accepted and I find it very difficult to defend it to managers who have never used a rich IDE in their life, never mind developed software. Further, the test network and remote client have no guaranteed service level, therefore it is down for a few hours per month with the lowest priority on the fix list. Help is appreciated.

    Read the article

  • In Social Relationship Management, the Spirit is Willing, but Execution is Weak

    - by Mike Stiles
    In our final talk in this series with Aberdeen’s Trip Kucera, we wanted to find out if enterprise organizations are actually doing anything about what they’re learning around the importance of communicating via social and using social listening for a deeper understanding of customers and prospects. We found out that if your brand is lagging behind, you’re not alone. Spotlight: How was Aberdeen able to find out if companies are putting their money where their mouth is when it comes to implementing social across the enterprise? Trip: One way to think about the relative challenges a business has in a given area is to look at the gap between “say” and “do.” The first of those words reveals the brand’s priorities, while the second reveals their ability to execute on those priorities. In Aberdeen’s research, we capture this by asking firms to rank the value of a set of activities from one on the low end to five on the high end. We then ask them to rank their ability to execute those same activities, again on a one to five, not effective to highly effective scale. Spotlight: And once you get their self-assessments, what is it you’re looking for? Trip: There are two things we’re looking for in this analysis. The first is we want to be able to identify the widest gaps between perception of value and execution. This suggests impediments to adoption or simply a high level of challenge, be it technical or otherwise. It may also suggest areas where we can expect future investment and innovation. Spotlight: So the biggest potential pain points surface, places where they know something is critical but also know they aren’t doing much about it. What’s the second thing you look for? Trip: The second thing we want to do is look at specific areas in which high-performing companies, the Leaders, are out-executing the Followers. This points to the business impact of these activities since Leaders are defined by a set of business performance metrics. Put another way, we’re correlating adoption of specific business competencies with performance, looking for what high-performers do differently. Spotlight: Ah ha, that tells us what steps the winners are taking that are making them winners. So what did you find out? Trip: Generally speaking, we see something of a glass curtain when it comes to the social relationship management execution gap. There isn’t a single social media activity in which more than 50% of respondents indicated effectiveness, which would be a 4 or 5 on that 1-5 scale. This despite the fact that 70% of firms indicate that generating positive social media mentions is valuable or very valuable, a 4 or 5 on our 1-5 scale. Spotlight: Well at least they get points for being honest. The verdict they’re giving themselves is that they just aren’t cutting it in these highly critical social development areas. Trip: And the widest gap is around directly engaging with customers and/or prospects on social networks, which 69% of firms rated as valuable but only 34% of companies say they are executing well. Perhaps even more interesting is that these two are interdependent since you’re most likely to generate goodwill on social through happy, engaged customers. This data also suggests that social is largely being used as a broadcast channel rather than for one-to-one engagement. As we’ve discussed previously, social is an inherently personal media. 
Spotlight: And if they’re still using it as a broadcast channel, that shows they still fail to understand the root of social and see it as just another outlet for their ads and push-messaging. That’s depressing. Trip: A second way to evaluate this data is by using Aberdeen’s performance benchmarking. The story is a bit different, but consistent in its own way. The first thing we notice is that Leaders are more effective in their execution of several key social relationship management capabilities, namely generating positive mentions and engaging with “influencers” and customers. Based on the fact that Aberdeen uses a broad set of performance metrics to rank the respondents as either “Leaders” (top 35% in weighted performance) or “Followers” (bottom 65% in weighted performance), from website conversion to annual revenue growth, we can then correlate high social effectiveness with company performance. We can also connect the specific social capabilities used by Leaders with effectiveness. We spoke about a few of those key capabilities last time and also discuss them in a new report: Social Powers Activate: Engineering Social Engagement to Win the Hidden Sales Cycle. Spotlight: What all that tells me is there are rewards for making the effort and getting it right. That’s how you become a Leader. Trip: But there’s another part of the story, which is that overall effectiveness, even among Leaders, is muted. There’s just one activity in which a majority of Leaders cite high effectiveness: the generation of positive buzz. While 80% of Leaders indicate “directly engaging with customers” through social media channels is valuable, the highest rated activity among Leaders, only 42% say they’re effective. This gap even among Leaders shows the challenges still involved in effective social relationship management. @mikestiles Photo: stock.xchng

    Read the article

  • git rebase onto remote updates

    - by Blake Chambers
    I work with a small team that uses git for source code management. Recently, we have been doing topic branches to keep track of features, then merging them into master locally, then pushing them to a central git repository on a remote server. This works great when no changes have been made in master: I create my topic branch, commit it, merge it into master, then push. Hooray. However, if someone has pushed to origin before I do, my commits are not fast-forward, so a merge commit ensues. This also happens when a topic branch needs to merge with master locally to ensure my changes work with the code as of now. So, we end up with merge commits everywhere and a git log rivaling a friendship bracelet. So, rebasing is the obvious choice. What I would like is to:

    - create topic branches holding several commits
    - checkout master and pull (fast-forward because I haven't committed to master)
    - rebase topic branches onto the new head of master
    - rebase topics against master (so the topics start at master's head), bringing master up to my topic head

    My way of doing this currently is listed below:

    git checkout master
    git rebase master topic_1
    git rebase topic_1 topic_2
    git checkout master
    git rebase topic_2
    git branch -d topic_1 topic_2

    Is there a faster way to do this?

    Read the article

  • WMI to reboot remote machine

    - by Stephen Murby
    I found this code on an old thread to shut down the local machine:

    using System.Management;

    void Shutdown()
    {
        ManagementBaseObject mboShutdown = null;
        ManagementClass mcWin32 = new ManagementClass("Win32_OperatingSystem");
        mcWin32.Get();
        // You can't shutdown without security privileges
        mcWin32.Scope.Options.EnablePrivileges = true;
        ManagementBaseObject mboShutdownParams = mcWin32.GetMethodParameters("Win32Shutdown");
        // Flag 1 means we want to shut down the system. Use "2" to reboot.
        mboShutdownParams["Flags"] = "1";
        mboShutdownParams["Reserved"] = "0";
        foreach (ManagementObject manObj in mcWin32.GetInstances())
        {
            mboShutdown = manObj.InvokeMethod("Win32Shutdown", mboShutdownParams, null);
        }
    }

    Is it possible to use a similar WMI method to reboot (flag "2") a remote machine, for which I only have the machine name, not the IP address? EDIT: I currently have:

    SearchResultCollection allMachinesCollected = machineSearch.FindAll();
    Methods myMethods = new Methods();
    string pcName;
    ArrayList allComputers = new ArrayList();
    foreach (SearchResult oneMachine in allMachinesCollected)
    {
        //pcName = oneMachine.Properties.PropertyNames.ToString();
        pcName = oneMachine.Properties["name"][0].ToString();
        allComputers.Add(pcName);
        MessageBox.Show(pcName + "has been sent the restart command.");
        Process.Start("shutdown.exe", "-r -f -t 0 -m \" + pcName);
    }

    but this doesn't work, and I would prefer WMI going forward.
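
    As an aside, the shutdown.exe fallback in the snippet above loses the escaped \\ prefix in front of the machine name. A hedged sketch of the corrected command line (shown here from Python rather than C#, purely to illustrate the argument list; the flags mirror the ones already used in the question, and the machine name is hypothetical):

    # Sketch: reboot a remote machine by name via shutdown.exe, the fallback the question already uses.
    # Note the machine name must be prefixed with \\ ; requires appropriate rights on the target.
    import subprocess

    def reboot(machine_name):
        # /r reboot, /f force, /t 0 no delay, /m \\name target machine
        cmd = ["shutdown.exe", "/r", "/f", "/t", "0", "/m", r"\\" + machine_name]
        subprocess.run(cmd, check=True)

    if __name__ == "__main__":
        reboot("PC-NAME-HERE")   # hypothetical machine name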

    Read the article

  • How can I work around SQL Server - Inline Table Value Function execution plan variation based on par

    - by Ovidiu Pacurar
    Here is the situation: I have a table-valued function with a datetime parameter, let's say tdf(p_date), that filters about two million rows, selecting those with column Date smaller than p_date, and computes some aggregate values on other columns. It works great, but if p_date is a custom scalar-valued function (returning the end of day in my case) the execution plan is altered and the query goes from 1 second to 1 minute execution time. A proof-of-concept table - 1K products, 2M rows:

    CREATE TABLE [dbo].[POC](
        [Date] [datetime] NOT NULL,
        [idProduct] [int] NOT NULL,
        [Quantity] [int] NOT NULL
    ) ON [PRIMARY]

    The inline table-valued function:

    CREATE FUNCTION tdf (@p_date datetime)
    RETURNS TABLE
    AS
    RETURN
    (
        SELECT idProduct, SUM(Quantity) AS TotalQuantity, MAX(Date) AS LastDate
        FROM POC
        WHERE (Date < @p_date)
        GROUP BY idProduct
    )

    The scalar-valued function:

    CREATE FUNCTION [dbo].[EndOfDay] (@date datetime)
    RETURNS datetime
    AS
    BEGIN
        DECLARE @res datetime
        SET @res = dateadd(second, -1, dateadd(day, 1, dateadd(ms, -datepart(ms, @date), dateadd(ss, -datepart(ss, @date), dateadd(mi, -datepart(mi, @date), dateadd(hh, -datepart(hh, @date), @date))))))
        RETURN @res
    END

    Query 1 - Working great:

    SELECT * FROM [dbo].[tdf] (getdate())

    The end of the execution plan: Stream Aggregate Cost 13% <--- Clustered Index Scan Cost 86%

    Query 2 - Not so great:

    SELECT * FROM [dbo].[tdf] (dbo.EndOfDay(getdate()))

    The end of the execution plan: Stream Aggregate Cost 4% <--- Filter Cost 12% <--- Clustered Index Scan Cost 86%

    Read the article

  • perl: Run remote perl script through SSH and query environment variables on remote machine

    - by kakyo
    I'm running a perl script through SSH. In the perl script I query environment variables using $ENV{MY_VAR_NAME}, and it works fine when run locally. But through SSH, all environment variables come back unset. I also tried running system("source ~/.bash_profile"); at the beginning of my script, to no avail. Any tips? EDIT: Rephrasing my question. I have machines A and B. I ran my perl script on machine B, trying to get the environment variables on B, and it worked. Then I ssh from A to B running the same script, i.e., using this: ssh user@B perl myscript.pl This time the environment variables on B are all blank. Any tips? UPDATE: I found that when running the above command, ~/.bashrc on machine B was invoked, but after setting environment variables in ~/.bashrc and running the above command again, I still don't see any environment variables. Also, if my perl script contains only echo $ENV{PATH} then I get /usr/bin:/bin:/usr/sbin:/sbin
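
    The usual explanation is that "ssh user@B command" runs the command without a login shell, so ~/.bash_profile is never read. A common workaround is to force a login shell on the remote side, e.g. ssh user@B "bash -lc 'perl myscript.pl'". A hedged sketch of that workaround, driven from Python here and assuming bash on machine B:

    # Sketch: run the remote perl script under a login shell so ~/.bash_profile is sourced.
    # Equivalent to: ssh user@B "bash -lc 'perl myscript.pl'"
    import shlex
    import subprocess

    def run_remote_with_profile(host, remote_command):
        # ssh joins its remaining arguments into one remote command line, so quote the payload;
        # bash -l starts a login shell (reads ~/.bash_profile), -c runs the command inside it
        result = subprocess.run(
            ["ssh", host, "bash", "-lc", shlex.quote(remote_command)],
            capture_output=True, text=True, check=True,
        )
        return result.stdout

    if __name__ == "__main__":
        print(run_remote_with_profile("user@B", "perl myscript.pl"))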

    Read the article

  • Can Remote Desktop Services be deployed and administered by PowerShell alone, without a Domain in Windows Server 2012 and 2012 R2?

    - by Warren P
    Windows Server 2008 R2 allowed deployment of Terminal Server (Remote Desktop Services) without a domain, and without any insistence on domains. This was very useful, especially for standalone virtual or cloud deployments of a server that is managed remotely for a remote client who has no need or desire for any ActiveDirectory or Domain features. This has become steadily more difficult as Microsoft restricts its technologies further in each Windows release. With Windows Server 2012, configuring licensing for Remote Desktop Services is more difficult when not on a domain, but still possible. With Windows Server 2012 R2 (at least in the preview) the barriers are now severe: The Add/Remove Roles and Features wizard in Windows Server 2012 R2 has a special RDS deployment mode with a rule that says if you aren't on a domain you can't deploy. It tells you to create or join a domain first. This of course comes into direct conflict with the fact that an Active Directory domain controller should not be the same machine as a terminal server machine. So Microsoft's technology is not so much a Cloud Operating System as a Cluster of Unwanted Nodes, needed to support the one machine I actually WANT to deploy. This is gross, and so I am trying to find a workaround. However, if you skip that wizard and just check the checkboxes in the main Roles/Features wizard, you can deploy the features, but the UI is not there to configure them, and when you go back to the RDS configuration page in the roles wizard, you get a message saying you cannot administer your Remote Desktop Services system when you are logged in as a Local-Computer Administrator, because although you have all the admin privileges you could have (in your workgroup-based system), the RDS configuration UI will not accept those credentials and let you continue. My question in brief is, can I still somehow obtain the following end result: I need to allow 10-20 users per system to have an RDS (TS) session. I do not need any of the fancy-pants RDS options, unless Microsoft somehow depends on those features being present. I believe I need the "RDS Session Host" as this is the guts of "Terminal Server"; Microsoft says it is the "full Windows desktop for Remote Desktop Services client". I need to configure licensing so that the Grace Period does not expire, leaving my RDS non-functional, so this probably means I need a way to configure TS CALs. If all of the above could technically be done with the judicious use of PowerShell, I am prepared to even consider developing all the PowerShell scripts I would need to do the above. I'm not asking someone to write that for me. What I'm asking is, does anyone know if there is a technical impediment to what I want to do above, other than the deliberate crippling of the 2012 R2 UI for Workgroup users? Would the underlying technologies all still work if I manipulate and control them from a PowerShell script? Obviously a one-word Yes or No answer isn't that useful to anyone, so the question is really, yes or no, and why? In the case the answer is Yes, then how.

    Read the article

  • Ajax doesn't work on remote server

    - by Nuha
    Hello. When I implemented the chat function, I used Ajax to send messages from one file to another. It works well on localhost, but when I upload it to the remote server it doesn't work. Can you tell me why? Does Ajax need special configuration? Ajax code:

    function Ajax_Send(GP,URL,PARAMETERS,RESPONSEFUNCTION){
        var xmlhttp
        try{xmlhttp=new ActiveXObject("Msxml2.XMLHTTP")}
        catch(e){
            try{xmlhttp=new ActiveXObject("Microsoft.XMLHTTP")}
            catch(e){
                try{xmlhttp=new XMLHttpRequest()}
                catch(e){
                    alert("Your Browser Does Not Support AJAX")}}}

        err=""
        if (GP==undefined) err="GP "
        if (URL==undefined) err+="URL "
        if (PARAMETERS==undefined) err+="PARAMETERS"
        if (err!=""){alert("Missing Identifier(s)\n\n"+err);return false;}

        xmlhttp.onreadystatechange=function(){
            if (xmlhttp.readyState == 4){
                if (RESPONSEFUNCTION=="") return false;
                eval(RESPONSEFUNCTION(xmlhttp.responseText))
            }
        }

        if (GP=="GET"){
            URL+="?"+PARAMETERS
            xmlhttp.open("GET",URL,true)
            xmlhttp.send(null)
        }

        if (GP=="POST"){
            PARAMETERS=encodeURI(PARAMETERS)
            xmlhttp.open("POST",URL,true)
            xmlhttp.setRequestHeader("Content-type", "application/x-www-form-urlencoded")
            xmlhttp.setRequestHeader("Content-length",PARAMETERS.length)
            xmlhttp.setRequestHeader("Connection", "close")
            xmlhttp.send(PARAMETERS)
        }
    }

    Read the article

  • Remote Postgresql - extremely slow

    - by Muffinbubble
    Hi, I have set up PostgreSQL on a VPS I own - the software that accesses the database is a program called PokerTracker. PokerTracker logs all your hands and statistics whilst playing online poker. I wanted this accessible from several different computers, so I decided to install it on my VPS, and after a few hiccups I managed to get it connecting without errors. However, the performance is dreadful. I have done tons of research on 'remote postgresql slow' etc. and am yet to find an answer, so am hoping someone is able to help. Things to note: The query I am trying to execute is very small. Whilst connecting locally on the VPS, the query runs instantly. While running it remotely, it takes about 1 minute and 30 seconds to run the query. The VPS is on a 100Mbps connection and the computer I'm connecting from is on an 8MB line. The network communication between the two is almost instant, I am able to remotely connect fine with no lag whatsoever, and I am hosting several websites running MSSQL where all the queries run instantly, whether connected remotely or locally, so it seems specific to PostgreSQL. I'm running their newest version of the software and the newest compatible version of PostgreSQL with their software. The database is a new database, containing hardly any data, and I've run vacuum/analyze etc., all to no avail; I see no improvements. I don't understand how MSSQL can query almost instantly yet PostgreSQL struggles so much. I am able to telnet to port 5432 on the VPS IP with no problems, and as I say the query does execute, it just takes an extremely long time. What I do notice on the router when the query is running is that hardly any bandwidth is being used - but then again I wouldn't expect it to be for a simple query, though I am not sure if this is the issue. I've tried connecting remotely on 3 different networks now (including different routers) but the problem remains. Connecting remotely via another machine on the LAN is instant. I have also edited the postgres conf file to allow for more memory/buffers etc., but I don't think this is the problem - what I am asking it to do is very simple, it shouldn't be intensive at all. Thanks, Ricky
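
    One way to narrow this down is to time the connection setup and the query separately from the remote machine; if the connect alone eats most of the 90 seconds, the problem is on the network/authentication side rather than in the query itself. A hedged Python sketch using psycopg2, with placeholder connection details:

    # Sketch: time the connection and the query separately to see where the 90 seconds go.
    # Connection details are placeholders; requires: pip install psycopg2-binary
    import time
    import psycopg2

    def time_remote_query(dsn, sql):
        t0 = time.time()
        conn = psycopg2.connect(dsn)
        t1 = time.time()
        with conn, conn.cursor() as cur:
            cur.execute(sql)
            rows = cur.fetchall()
        t2 = time.time()
        conn.close()
        print(f"connect: {t1 - t0:.2f}s  query+fetch: {t2 - t1:.2f}s  rows: {len(rows)}")

    if __name__ == "__main__":
        time_remote_query(
            "host=my.vps.example dbname=pokertracker user=pt password=secret",  # placeholders
            "SELECT 1",  # replace with the query that is slow for you
        )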

    Read the article

  • Simple Query tuning with STATISTICS IO and Execution plans

    A great deal can be gleaned from the use of STATISTICS IO and the execution plan when you are checking that a query is performing properly. Josef Richberg, the current holder of the 'Exceptional DBA' award, explains how an apparently draconian IT policy turns out to be a useful way of ensuring that stored procedures are carefully checked for performance before they are released.

    Read the article

  • CVE-2012-1182 Arbitrary code execution vulnerability in Samba

    - by chandan
    CVE-2012-1182: Arbitrary code execution vulnerability
    CVSSv2 Base Score: 10
    Component: Samba
    Product and Resolution:
      Solaris 10 - SPARC: 119757-22, x86: 119758-22
      Solaris 11 11/11 - SRU 7.5
      Solaris 9 - SPARC: 114684-18, x86: 114685-18
    This notification describes vulnerabilities fixed in third-party components that are included in Oracle's product distributions. Information about vulnerabilities affecting Oracle products can be found on the Oracle Critical Patch Updates and Security Alerts page.

    Read the article

  • CVE-2012-1714 TList 6 ActiveX control remote code execution vulnerability in Hyperion Financial Management

    - by chandan
    CVE-2012-1714: Remote code execution vulnerability
    CVSSv2 Base Score: 10
    Component: TList 6 ActiveX control
    Product and Resolution:
      Hyperion Financial Management 11.1.1.4 - Contact Support
      Hyperion Financial Management 11.1.2.1.104 - Microsoft Windows (32-bit), Microsoft Windows (64-bit)
    This notification describes vulnerabilities fixed in third-party components that are included in Sun's product distribution. Information about vulnerabilities affecting Oracle Sun products can be found on the Oracle Critical Patch Updates and Security Alerts page.

    Read the article

  • How do I make this ad execution?

    - by Maggie
    I am doing research on replicating an ad execution - http://www.digitalbuzzblog.com/gol-airlines-mobile-controlled-banner-game/ It's a simple "game" that uses the phone as a forward/back/left/right controller for a car in Flash on the internet. I've started reading up on P2P, but I'm finding such a vast amount of information, much of it not specific to what I need, that it's hard to sort through. Does anyone know of any tutorials, or can anyone shed some light on how I might go about making a very simple mobile controller for a Flash game?

    Read the article

  • Exporting Execution Plans - SQL Spackle

    A short SQL Spackle article to fill in your knowledge of SQL Server. In this one, Jason Brimhall shows how to export execution plans when you ask for query tuning help.

    Read the article

  • CVE-2012-4245 Arbitrary code execution vulnerability in Gimp

    - by Umang_D
    CVE-2012-4245: Arbitrary code execution vulnerability
    CVSSv2 Base Score: 6.8
    Component: Gimp
    Product and Resolution:
      Solaris 11 11/11 - SRU 12.4
      Solaris 10 - Contact Support
    This notification describes vulnerabilities fixed in third-party components that are included in Oracle's product distributions. Information about vulnerabilities affecting Oracle products can be found on the Oracle Critical Patch Updates and Security Alerts page.

    Read the article
