Search Results

Search found 34308 results on 1373 pages for 'identity and access governance'.

Page 314 of 1373

  • Is this a DNS or server-side error?

    - by joshlfisher
    I am having difficulty accessing a specific website (I get 500 Server Error responses). I can access the site on my iPhone when it is NOT connected to WiFi, but I CANNOT access it when connected to WiFi or via an Ethernet connection to my home network. I thought it might be a DNS issue, so I copied the DNS servers from a friend who has a different ISP and has no problem accessing the site. No luck. I also tried some of the public DNS servers out there, again with no luck. Does anyone have any idea how to trace this issue?
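
    One way to narrow this down is to compare what different resolvers return for the site and what the server itself answers. This is only a rough sketch; the domain and addresses below are placeholders, not taken from the question:

        # Compare the answer from the current resolver with a public one
        dig example.com A
        dig example.com A @8.8.8.8

        # Ask the server directly; --resolve pins the name to a specific IP
        curl -I http://example.com/
        curl -I --resolve example.com:80:203.0.113.10 http://example.com/

    If the two resolvers return different addresses, it is a DNS problem; if they agree but the 500 only shows up from the home network, the cause is more likely the server, a proxy, or something else in the path.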

    Read the article

  • Stairway to XML: Level 3 - Working with Typed XML

    You can enforce validation of an XML data type, variable, or column by associating it with an XML Schema Collection. SQL Server validates a typed XML value against the rules defined in the schema collection, so INSERT or UPDATE operations succeed only if the value being inserted or updated conforms to the rules defined in the Schema Collection.
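
    As a rough illustration of the idea (the schema collection, table, and element names here are made up for the example, not taken from the Stairway):

        -- A schema collection that requires an <Order> element with an integer <Quantity>
        CREATE XML SCHEMA COLLECTION dbo.OrderSchema AS N'
        <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
                   targetNamespace="urn:orders" xmlns="urn:orders"
                   elementFormDefault="qualified">
          <xs:element name="Order">
            <xs:complexType>
              <xs:sequence>
                <xs:element name="Quantity" type="xs:int"/>
              </xs:sequence>
            </xs:complexType>
          </xs:element>
        </xs:schema>';

        -- Typed XML column: every value is validated against the collection
        CREATE TABLE dbo.Orders
        (
            OrderId  int IDENTITY PRIMARY KEY,
            OrderXml xml(dbo.OrderSchema)
        );

        -- Succeeds: the document conforms to the schema
        INSERT INTO dbo.Orders (OrderXml)
        VALUES (N'<Order xmlns="urn:orders"><Quantity>5</Quantity></Order>');

        -- Fails: "five" is not a valid xs:int, so the INSERT is rejected
        INSERT INTO dbo.Orders (OrderXml)
        VALUES (N'<Order xmlns="urn:orders"><Quantity>five</Quantity></Order>');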

    Read the article

  • The Changing Face of PASS

    - by Bill Graziano
    I'm starting my sixth year on the PASS Board. I served two years as the Program Director, two years as the Vice-President of Marketing and I'm starting my second year as the Executive Vice-President of Finance. There's a pretty good chance that if PASS has done something you don't like, or is doing something you don't like, I'm involved in one way or another. Andy Leonard asked in a comment on his blog if the Board had ever reversed itself based on community input. He asserted that it hadn't. I disagree. I'm not going to try and list all the changes we make inside portfolios based on feedback from and meetings with the community. I'm going to focus on major governance issues since I was elected to the Board.

    Management Company
    The first big change was our management company. Our old management company had a standard approach to running a non-profit. It worked well when PASS was launched. Having a ready-made structure and process to run the organization enabled the organization to grow quickly. As time went on we were limited in some of the things we wanted to do. The more involved you were with PASS, the more you saw these limitations. Key volunteers were regularly providing feedback that they wanted certain changes that were difficult for us to accomplish. The Board at that time wanted changes that were difficult or impossible to accomplish under that structure. This was not a simple change. Imagine a $2.5 million company letting all its employees go on a Friday and starting with a new staff on Monday. We also had a very narrow window to accomplish that so that we wouldn't affect the Summit – our only source of revenue. We spent the year after the change rebuilding processes and putting on the Summit in Denver. That's a concrete example of a huge change that PASS made to better serve its members. And it was a change that many in the community were telling us we needed to make.

    Financials
    We heard regularly from our members that they wanted our financials posted. Today on our web site you can find audited financials going back to 2004. We publish our budget at the start of each year. If you ask a question about the financials on the PASS site I do my best to answer it. I'm also trying to do a better job answering financial questions posted in other locations. (And yes, I know I owe a few of you some blog posts.) That's another concrete example of a change that our members asked for that the Board agreed was a good decision.

    Minutes
    When I started on the Board the meeting minutes were very limited. The minutes from a two-day Board meeting might fit on one page. I think we did the bare minimum we were legally required to do. Today Board meeting minutes run from 5 to 12 pages and go into incredible detail on what we talk about. There are certain topics that are NDA, but where possible we try to list the topic we discussed and note that the actual discussion was under NDA. We also publish the agenda of Board meetings ahead of time. This is another specific example where input from the community influenced the decision. It was certainly easier to have limited minutes, but I think the extra effort helps our members understand what's going on.

    Board Q&A
    At the 2009 Summit the Board held its first public Q&A with our members. We'd always been available individually to answer questions. There's a benefit to getting us all in one room and asking the really hard questions to watch us squirm. We learn what questions we don't have good answers for. We get to see how many people in the crowd look interested in the various questions and answers. I don't recall the genesis of how this came about. I'm fairly certain there was some community pressure though.

    Board Votes
    Until last November, the Board only reported the vote totals and not how individual Board members voted. That was one of the topics at a great lunch I had with Tim Mitchell and Kendal van Dyke at the Summit. That was also the topic of the first question asked at the Board Q&A by Kendal. Kendal expressed his opposition to anonymous votes clearly and passionately and without trying to paint anyone into a corner. Less than 24 hours later the PASS Board voted to make individual votes public unless the topic was under NDA. That's another area where the Board decided to change based on feedback from our members.

    Summit Location
    While this isn't actually a governance issue, it is one of the more public decisions we make that has taken some public criticism. There is a significant portion of our members that want the Summit near them. There is a significant portion of our members that like the Summit in Seattle. There is a significant portion of our members that think it should move around the country. I was one that felt strongly that there were significant, tangible benefits to our attendees to being in Seattle every year. I'm also one that has been swayed by some very compelling arguments that we need to have at least one Summit outside Seattle and then revisit the decision. I can't tell you how the Board will vote, but I know the opinion of our members weighs heavily on the decision.

    Elections
    And that brings us to the grand-daddy of all governance issues. My thesis for this blog post is that the PASS Board has implemented policy changes in response to member feedback. It isn't to defend or criticize our election process. It's just to say that it has been undergoing continuous change since I've been on the Board. I ran for the Board in the fall of 2005. I don't know much about what happened before then. I was actively volunteering for PASS for four years prior to that as a chapter leader and on the program committee. I don't recall any complaints about elections, but that doesn't mean they didn't occur. The questions from the Nominating Committee (NomCom) were trivial and the selection process rudimentary (for example, "Tell us about your accomplishments"). I don't even remember who I ran against or how many other people ran. I ran for the VP of Marketing in the fall of 2007. I don't recall any significant changes the Board made in the election process for that election. I think a lot of the changes in 2007 came from us asking the management company to work on the election process. I was expecting a similar set of puff-ball questions from my previous election. Boy, was I in for a shock. The NomCom had found a much better set of questions and really made the interview portion difficult. The questions were much more behavioral in nature. I'd already written about my vision for PASS and my goals. They wanted to know how I handled adversity, how I handled criticism, how I handled conflict, how I handled troublesome volunteers, how I motivated people and how I responded to motivation. And many, many other things. They grilled me for over an hour. I've done a fair bit of technical sales in my time. I feel I speak well under pressure addressing pointed questions. This interview intentionally put me under pressure. In addition to wanting to know about my interpersonal skills, my work experience, my volunteer experience and my supervisory experience, they wanted to see how I'd do under pressure. They wanted to see who would respond under pressure and who wouldn't. It was a bit of a shock. That was the first big change I remember in the election process. I know there were other improvements around the process, but none of them stick in my mind quite like the unexpected hour-long grilling.

    The next big change I remember was after the 2009 elections. Andy Warren was unhappy with the election process and wanted to make some changes. He worked with Hannes at HQ and they came up with a better set of processes. I think Andy moved PASS in the right direction. Nonetheless, after the 2010 election even more people were very publicly clamoring for changes to our election process. In August of 2010 we had a choice to make. There were numerous bloggers criticizing the Board and our upcoming election. The easy change would be to announce that we were changing the process in a way that would satisfy our critics. I believe that a knee-jerk response to criticism is seldom correct. Instead the Board spent August and September and October and November listening to the community. I visited two SQLSaturdays and asked questions of everyone I could. I attended chapter meetings and asked questions of as many people as they'd let me. At Summit I made it a point to introduce myself to strangers and ask them about the election. At every breakfast I'd sit down at a table full of strangers and ask about the election. I'm happy to say that I left most tables arguing about the election. Most days I managed to get 2 or 3 breakfasts in. I spent less time talking to people that had already written about the election. They were already expressing their opinion. I wanted to talk to people that hadn't spoken up. I wanted to know what the silent majority thought. The Board all attended the Q&A session where our members expressed their concerns about a variety of issues including the election.

    The PASS Board also chose to create the Election Review Committee. We wanted people from the community that had been involved with PASS to look at our election process with fresh eyes while listening to what the community had to say and give us some advice on how we could improve the process. I'm a part of this, as is Andy Warren. None of the other members are on the Board. I've sat in numerous calls and interviews with this group and attended an open meeting at the Summit. We asked anyone that wanted to discuss the election to come speak with us. The ERC held an open meeting at the Summit and invited anyone to attend. There are forums on the ERC web site where we've invited people to participate. The ERC has reached out to key people involved in recent elections. The years that I haven't mentioned also saw minor improvements in the election process. Off the top of my head I don't recall what exact changes were made each year. Specifically since the 2010 election we've gone out of our way to seek input from the community about the process. I'm not sure what more we could have done to invite feedback from the community. I think to say that we haven't "fixed" the election process isn't a fair criticism at this time. We haven't rushed any changes through the process. If you don't see any changes in our election process in July or August, then I think it's fair to criticize us for ignoring the community or ask for an explanation for what we've done.

    In Summary
    Andy's main point was that the PASS Board hasn't changed in response to our members' wishes. I think I've shown that time and time again the PASS Board has changed in response to what our members want. There are only two outstanding issues: Summit location and elections. The 2013 Summit location hasn't been decided yet. Our work on the elections is also in progress. And at every step in the election review we've gone out of our way to listen to the community and incorporate their feedback on the process. I also hope I'm not encouraging everyone that wants some change in the organization to organize a "blog rush" against the Board. We take public suggestions very seriously, but we also take the time to evaluate those suggestions and learn what the rest of our members think and make a measured decision.

    Read the article

  • Problem in Addon Domain name binding with existing directory in my webspace

    - by articlestack
    I recently purchased the domain thinkzarahatke.com. At the time of purchase I selected an option, a URL redirect, to my existing site, and I entered the name server details. But I didn't find any other option to change the URL rewrite parameters under the Domain Name Manager. From the cPanel of my hosting space, I added the new domain as an addon domain and set a subdirectory for it. Now if I try to access my new domain directly, it automatically redirects to my old website, while if I access an inside directory, like thinkzarahatke.com/blog, I can reach my new site. What did I do wrong? Please tell me if I need to change some more settings for the addon domain.

    Read the article

  • How do I disable the mysql command in the sudoers file?

    - by Carlos A. Junior
    How can I disable the /usr/bin/mysql command in the sudoers file? I've tried it this way: %tailonly ALL=!/usr/bin/mysql But when I log in as the user 'tailonly' (group 'tailonly'), the command is still enabled. In short, I only want the 'tailonly' user to be able to run 'tail -f /usr/app/*.log'. Is this possible?

    Edit: with the following config, the user 'tailonly' can still open a mysql shell with the 'mysql' command:

        $ sudo su
        $ visudo

        Cmnd_Alias MYSQL = /usr/bin/mysql
        Cmnd_Alias TAIL = /usr/bin/tail -f /jacad/jacad3/logs/*.log

        # User privilege specification
        root      ALL=(ALL:ALL) ALL
        # Members of the admin group may gain root privileges
        %admin    ALL=(ALL) ALL
        # Allow members of group sudo to execute any command
        %sudo     ALL=(ALL:ALL) ALL
        %swa      ALL=/etc/init.d/jacad3 stop
        %swa      ALL=/etc/init.d/jacad3 start
        %swa      ALL=/etc/init.d/jacad3 restart
        %swa      ALL=sudoedit /jacad/jacad3/bin/jacad_start.sh
        %tailonly ALL=ALL,!MYSQL
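
    For what it's worth, the sudoers(5) documentation itself warns that exclusions like ALL,!MYSQL are easy to bypass (for example by copying /usr/bin/mysql to another name), so a whitelist-only rule is the usual approach. A sketch using the aliases already defined in the question:

        Cmnd_Alias TAIL = /usr/bin/tail -f /jacad/jacad3/logs/*.log

        # Grant only the tail command; with no ALL in the list, mysql is simply
        # not permitted rather than "everything minus an exception"
        %tailonly ALL = TAIL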

    Read the article

  • How can these files be accessed?

    - by harsh.singla
    The files can be accessed from every artifact of the composite, such as .bpel, .mplan, .task, .xsl, .wsdl, etc. The 'oramds' protocol is used to access these files. You need to set up the adf-config.xml file in your development environment or JDeveloper to access these files from MDS. Here is the sample adf-config.xml: xmlns:sec="http://xmlns.oracle.com/adf/security/config" name="jdbc-url"/ name="metadata-path"/ credentialStoreLocation="../../src/META-INF/jps-config.xml"/ This adf-config.xml is located in a directory named .adf/META-INF, which is in the application home of your project. The application home is the directory where the .jws file of your application exists. Other than setting up this file, you need not make any other changes in your project or composite to access MDS. After setting this up, you can create a new SOA-MDS connection in JDeveloper. This gives you a resource palette in which you can browse and choose the required file from MDS.
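
    For orientation, the MDS section of an adf-config.xml that points at a database-backed MDS store typically looks roughly like the following sketch; the connection details, partition name, and namespace path are placeholders, not values taken from this post, and should be replaced with your own:

        <adf-mds-config xmlns="http://xmlns.oracle.com/adf/mds/config">
          <mds-config xmlns="http://xmlns.oracle.com/mds/config">
            <persistence-config>
              <metadata-namespaces>
                <!-- expose the /apps namespace so oramds:/apps/... references resolve -->
                <namespace path="/apps" metadata-store-usage="mstore-usage_1"/>
              </metadata-namespaces>
              <metadata-store-usages>
                <metadata-store-usage id="mstore-usage_1">
                  <metadata-store class-name="oracle.mds.persistence.stores.db.DBMetadataStore">
                    <property name="jdbc-userid"    value="DEV_MDS"/>
                    <property name="jdbc-password"  value="welcome1"/>
                    <property name="jdbc-url"       value="jdbc:oracle:thin:@host:1521:orcl"/>
                    <property name="partition-name" value="soa-infra"/>
                  </metadata-store>
                </metadata-store-usage>
              </metadata-store-usages>
            </persistence-config>
          </mds-config>
        </adf-mds-config>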

    Read the article

  • How to ensure apache2 reads htaccess for custom expiry?

    - by tzot
    I have a site running Apache 2.2.22. I have enabled the mod_expires and mod_headers modules, seemingly correctly:

        $ apachectl -t -D DUMP_MODULES
        ...
        expires_module (shared)
        headers_module (shared)
        ...

    Settings include:

        ExpiresActive On
        ExpiresDefault "access plus 10 minutes"
        ExpiresByType application/xml "access plus 1 minute"

    Checking the response headers, I see that max-age is set correctly both for the generic case and for XML files (which are auto-generated, but mostly static). I would like to have a different expiry for XML files in one directory (e.g. /data), so that http://site/data/sample.xml expires 24 hours later. I put the following in data/.htaccess:

        ExpiresByType application/xml "access plus 24 hours"
        Header set Cache-control "max-age=86400, public"

    but it seems that Apache ignores it. How can I ensure apache2 uses the .htaccess directives? I can provide further information if requested.
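
    A common reason for Apache 2.2 to silently ignore those .htaccess lines is the AllowOverride setting for that directory: mod_expires directives need the Indexes override and the Header directive needs FileInfo. A sketch of what the vhost or server config might need (the path is a placeholder):

        <Directory /var/www/site/data>
            # Allow ExpiresByType (Indexes) and Header (FileInfo) in .htaccess
            AllowOverride Indexes FileInfo
        </Directory>

    Alternatively, the same two directives can be placed directly inside a <Directory> or <Location> block in the main configuration, which avoids .htaccess processing altogether.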

    Read the article

  • Broadcom BCM4313 wireless slow and high-latency

    - by Florin Andrei
    Ubuntu 12.10 64-bit on a Dell Latitude E6330 laptop. Wireless is pretty slow. It connects quickly enough, but then it acts like a dialup connection. My ssh sessions over WiFi are slow and laggy, and even browsing is slow; the pages load like it's 1998. This does not depend on the access point: it's the same both at home and at work, and other systems work fine on these access points. I had an older Dell laptop before, with different WiFi hardware, and it was much faster over the same wireless access points. Is this a known issue with this hardware? If so, are there any solutions?

    Read the article

  • Multiple vulnerabilities in Thunderbird

    - by RitwikGhoshal
    Component: Thunderbird
    Product and Resolution: Solaris 10 (SPARC: 145200-12, X86: 145201-12)

    CVE | Description | CVSSv2 Base Score
    CVE-2012-1948 | Denial of service (DoS) vulnerability | 9.3
    CVE-2012-1950 | Address spoofing vulnerability | 6.4
    CVE-2012-1951 | Resource Management Errors vulnerability | 10.0
    CVE-2012-1952 | Resource Management Errors vulnerability | 9.3
    CVE-2012-1953 | Improper Restriction of Operations within the Bounds of a Memory Buffer vulnerability | 9.3
    CVE-2012-1954 | Resource Management Errors vulnerability | 10.0
    CVE-2012-1955 | Address spoofing vulnerability | 6.8
    CVE-2012-1957 | Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting') vulnerability | 4.3
    CVE-2012-1958 | Resource Management Errors vulnerability | 9.3
    CVE-2012-1959 | Permissions, Privileges, and Access Controls vulnerability | 5.0
    CVE-2012-1961 | Improper Input Validation vulnerability | 4.3
    CVE-2012-1962 | Resource Management Errors vulnerability | 10.0
    CVE-2012-1963 | Permissions, Privileges, and Access Controls vulnerability | 4.3
    CVE-2012-1964 | Clickjacking vulnerability | 4.0
    CVE-2012-1965 | Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting') vulnerability | 4.3
    CVE-2012-1966 | Permissions, Privileges, and Access Controls vulnerability | 4.3
    CVE-2012-1967 | Arbitrary code execution vulnerability | 10.0
    CVE-2012-1970 | Denial of service (DoS) vulnerability | 10.0
    CVE-2012-1973 | Resource Management Errors vulnerability | 10.0
    CVE-2012-3966 | Improper Restriction of Operations within the Bounds of a Memory Buffer vulnerability | 10.0

    This notification describes vulnerabilities fixed in third-party components that are included in Oracle's product distributions. Information about vulnerabilities affecting Oracle products can be found on the Oracle Critical Patch Updates and Security Alerts page.

    Read the article

  • How can I install wireless drivers without internet?

    - by Ruben
    [Ubuntu 12.04 LTS] I have an HP Pavilion dv6 and I need a Broadcom driver (closed source) to access the internet. However, I need to download that driver, which I am unable to do because I do not have internet access. My ethernet port has always been broken, and I have not been able to access the internet since I installed Ubuntu. I desperately need to find a way to install those drivers. I still have Ubuntu on my USB stick, which for some reason did have the ability to install that driver (I think because it already has it somewhere in its files); on the USB Ubuntu I have that particular driver installed. I was thinking that if one of you knows where drivers are installed, I could locate those files on the USB Ubuntu, plug in an additional USB stick to copy them, restart my computer into the hard-drive Ubuntu and then install the files from the additional USB stick. I would really appreciate help, since to me a computer without internet is useless.
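
    One possible route, assuming the live USB session really does have the driver working: the proprietary driver is packaged as bcmwl-kernel-source, so rather than copying installed module files around, look for the .deb packages themselves. The paths below are the usual locations, not guaranteed to be present:

        # In the live USB session:
        ls /var/cache/apt/archives/ | grep -i bcmwl        # cached downloads
        ls /cdrom/pool/restricted/b/bcmwl/                  # packages shipped on the image

        # Copy bcmwl-kernel-source_*.deb (and dkms_*.deb from /cdrom/pool/main/d/dkms
        # if dkms is not already installed) to a second USB stick, then on the
        # installed system:
        sudo dpkg -i dkms_*.deb bcmwl-kernel-source_*.deb
        sudo modprobe wl

    If dpkg complains about further missing dependencies, those packages can be fetched the same way from another machine that does have internet access.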

    Read the article

  • How do I store the OAuth v1 consumer key and secret for an open source desktop Twitter client without revealing it to the user?

    - by Justin Dearing
    I want to make a thick-client, desktop, open source twitter client. I happen to be using .NET as my language and Twitterizer as my OAuth/Twitter wrapper, and my app will likely be released as open source. To get an OAuth token, four pieces of information are required:

    - Access Token (twitter user name)
    - Access Secret (twitter password)
    - Consumer Key
    - Consumer Secret

    The second two pieces of information are not to be shared, like a PGP private key. However, due to the way the OAuth authorization flow is designed, these need to be on the native app. Even if the application was not open source, and the consumer key/secret were encrypted, a reasonably skilled user could gain access to the consumer key/secret pair. So my question is, how do I get around this problem? What is the proper strategy for a desktop Twitter client to protect its consumer key and secret?

    Read the article

  • Ransomware: Why This New Malware is So Dangerous and How to Protect Yourself

    - by Chris Hoffman
    Ransomware is a type of malware that tries to extort money from you. One of the nastiest examples, CryptoLocker, takes your files hostage and holds them for ransom, forcing you to pay hundreds of dollars to regain access. Most malware is no longer created by bored teenagers looking to cause some chaos. Much of the current malware is now produced by organized crime for profit and is becoming increasingly sophisticated.

    How Ransomware Works
    Not all ransomware is identical. The key thing that makes a piece of malware "ransomware" is that it attempts to extort a direct payment from you. Some ransomware may be disguised. It may function as "scareware," displaying a pop-up that says something like "Your computer is infected, purchase this product to fix the infection" or "Your computer has been used to download illegal files, pay a fine to continue using your computer." In other situations, ransomware may be more up-front. It may hook deep into your system, displaying a message saying that it will only go away when you pay money to the ransomware's creators. This type of malware could be bypassed via malware removal tools or just by reinstalling Windows. Unfortunately, ransomware is becoming more and more sophisticated. One of the latest examples, CryptoLocker, starts encrypting your personal files as soon as it gains access to your system, preventing access to the files without knowing the encryption key. CryptoLocker then displays a message informing you that your files have been locked with encryption and that you have just a few days to pay up. If you pay them $300, they'll hand you the encryption key and you can recover your files. CryptoLocker helpfully walks you through choosing a payment method and, after paying, the criminals seem to actually give you a key that you can use to restore your files. You can never be sure that the criminals will keep their end of the deal, of course. It's not a good idea to pay up when you're extorted by criminals. On the other hand, businesses that lose their only copy of business-critical data may be tempted to take the risk — and it's hard to blame them.

    Protecting Your Files From Ransomware
    This type of malware is another good example of why backups are essential. You should regularly back up files to an external hard drive or a remote file storage server. If all your copies of your files are on your computer, malware that infects your computer could encrypt them all and restrict access — or even delete them entirely. When backing up files, be sure to back up your personal files to a location where they can't be written to or erased. For example, place them on a removable hard drive or upload them to a remote backup service like CrashPlan that would allow you to revert to previous versions of files. Don't just store your backups on an internal hard drive or network share you have write access to. The ransomware could encrypt the files on your connected backup drive or on your network share if you have full write access. Frequent backups are also important. You wouldn't want to lose a week's worth of work because you only back up your files every week. This is part of the reason why automated backup solutions are so convenient. If your files do become locked by ransomware and you don't have the appropriate backups, you can try recovering them with ShadowExplorer. This tool accesses "Shadow Copies," which Windows uses for System Restore — they will often contain some personal files.

    How to Avoid Ransomware
    Aside from using a proper backup strategy, you can avoid ransomware in the same way you avoid other forms of malware. CryptoLocker has been verified to arrive through email attachments, via the Java plug-in, and installed on computers that are part of the Zeus botnet.

    - Use a good antivirus product that will attempt to stop ransomware in its tracks. Antivirus programs are never perfect and you could be infected even if you run one, but it's an important layer of defense.
    - Avoid running suspicious files. Ransomware can arrive in .exe files attached to emails, from illicit websites containing pirated software, or anywhere else that malware comes from. Be alert and exercise caution over the files you download and run.
    - Keep your software updated. Using an old version of your web browser, operating system, or a browser plugin can allow malware in through open security holes. If you have Java installed, you should probably uninstall it.

    For more tips, read our list of important security practices you should be following. Ransomware — CryptoLocker in particular — is brutally efficient and smart. It just wants to get down to business and take your money. Holding your files hostage is an effective way to prevent removal by antivirus programs after it's taken root, but CryptoLocker is much less scary if you have good backups. This sort of malware demonstrates the importance of backups as well as proper security practices. Unfortunately, CryptoLocker is probably a sign of things to come — it's the kind of malware we'll likely be seeing more of in the future.

    Read the article

  • Problems connecting Android ICS to Ubuntu using MTP

    - by ubuntico
    I've followed this tutorial from this blog which very clearly explains how to connect Android phone with ICS to Ubuntu so that one can access phone's sdcard (MTP access). I passed all the procedure with no errors, I can event attach my mobile to ubuntu via mtpfs -o allow_other ~/Android/GalaxyS2 and disconnect via fusermount -u ~/Android/GalaxyS2 The problem comes when I try to access mounted directory. If I try to do it via Nautilus, the system tries to open the folder for a couple of minutes and then, I either see the error, or the folder disappears from Nautilus (it comes back when I disconnect the path). I also get a console error: fuse: bad mount point `~/Android/GalaxyS2': Transport endpoint is not connected I see many people on the net reporting this error, but noone offers any solution to it. I use Ubuntu 11.10 with Gnome Shell (Gnome 3) and the mobile is Samsung Galaxy S II. I am in the fuse list, I did all the steps in the tutorial for dozens of times, all in vain.

    Read the article

  • OPN Developer Services for Solaris Developers

    - by user13333379
    Independent Software Vendors (ISVs) who develop applications for Solaris 11 can take advantage of a number of interesting services as long as they are OPN members with a Gold (or above) status and a Solaris Knowledge specialization:

    - Free access to a Solaris development cloud with preconfigured Solaris developer zones, including Solaris development environments for SPARC and x86 (apply for the Oracle Exastack Remote Labs).
    - Free access to patches and support information through MOS for Oracle Solaris, Oracle Solaris Studio, and Oracle Solaris Cluster, including updates for development systems (apply for the Oracle Solaris Development Initiative).
    - Free email developer support for all questions around Oracle Solaris, Oracle Solaris Studio, Oracle Solaris Cluster, and Oracle technologies integrating with Solaris 11 (apply for the Solaris Adoption Technical Assistance).

    Read the article

  • How to retain the animated position in OpenGL ES 2.0

    - by Arun AC
    I am doing frame-based animation for 300 frames in OpenGL ES 2.0. I want a rectangle to translate by +200 pixels on the X axis and also scale up by double (2 units) in the first 100 frames. Then the animated rectangle has to stay there for the next 100 frames. Then I want the same animated rectangle to translate by +200 pixels on the X axis and also scale down by half (0.5 units) in the last 100 frames. I am using simple linear interpolation to calculate the delta animation value for each frame.

    Pseudo code: the drawFrame() below is executed 300 times (300 frames) in a loop.

        float RectMVMatrix[4][4] = {1, 0, 0, 0,
                                    0, 1, 0, 0,
                                    0, 0, 1, 0,
                                    0, 0, 0, 1};   // identity matrix
        int totalframes = 300;
        float translate_delta;   // interpolated translation value for each frame
        float scale_delta;       // interpolated scale value for each frame

        // The usual code for draw is:
        void drawFrame(int iCurrentFrame)
        {
            // mySetIdentity(RectMVMatrix);   // comment this line out to retain the animated position
            mytranslate(RectMVMatrix, translate_delta, X_AXIS);   // translate the MV matrix on the x axis by translate_delta
            myscale(RectMVMatrix, scale_delta);                   // scale the MV matrix by scale_delta
            ...   // OpenGL calls
            glDrawArrays(...);
            eglswapbuffers(...);
        }

    The above code works fine for the first 100 frames. In order to retain the animated rectangle during frames 101 to 200, I removed the mySetIdentity(RectMVMatrix); line from drawFrame(). Now, on entering drawFrame() for the 2nd frame, RectMVMatrix holds the animated value from the first frame, e.g.:

        RectMVMatrix[4][4] = { 1.01, 0, 0, 2,
                               0,    1, 0, 0,
                               0,    0, 1, 0,
                               0,    0, 0, 1 };   // 2 pixels of translation and 1.01 units of scaling after the first frame

    This RectMVMatrix is used by mytranslate() in the 2nd frame. The translate function affects the value of RectMVMatrix[0][0], so translation also affects the scaling values, and eventually the output is wrong. How do I retain the animated position without affecting the current ModelView matrix?

    ===========================================

    I got the solution... Thanks to Sergio. I created separate matrices for translation and scaling, e.g. CurrentTranslateMatrix[4][4] and CurrentScaleMatrix[4][4]. Then, for every frame, I reset CurrentTranslateMatrix to identity and call mytranslate(CurrentTranslateMatrix, translate_delta, X_AXIS). I reset CurrentScaleMatrix to identity and call myscale(CurrentScaleMatrix, scale_delta). Then I multiply CurrentTranslateMatrix and CurrentScaleMatrix to get the final RectMVMatrix for the frame.

    Pseudo code:

        float RectMVMatrix[4][4] = {0};
        float CurrentTranslateMatrix[4][4] = {0};
        float CurrentScaleMatrix[4][4] = {0};
        int iTotalFrames = 300;
        int iAnimationFrames = 100;
        int iTranslate_X = 200;      // in pixels
        float fScale_X = 2.0f;
        float scaleDelta;
        float translateDelta_X;

        void DrawRect(int iTotalFrames)
        {
            mySetIdentity(RectMVMatrix);
            for (int i = 0; i < iTotalFrames; i++)
            {
                DrawFrame(i);
            }
        }

        void getInterpolatedValue(int iStartFrame, int iEndFrame, int iTotalFrame, int iCurrentFrame,
                                  float *scaleDelta, float *translateDelta_X)
        {
            float fDelta = (float)(iCurrentFrame - iStartFrame) / (float)(iEndFrame - iStartFrame);

            float fStartX = 0.0f;
            float fEndX = ConvertPixelsToOpenGLUnit(iTranslate_X);
            *translateDelta_X = fStartX + fDelta * (fEndX - fStartX);

            float fStartScaleX = 1.0f;
            float fEndScaleX = fScale_X;
            *scaleDelta = fStartScaleX + fDelta * (fEndScaleX - fStartScaleX);
        }

        void DrawFrame(int iCurrentFrame)
        {
            getInterpolatedValue(0, iAnimationFrames, iTotalFrames, iCurrentFrame, &scaleDelta, &translateDelta_X);

            mySetIdentity(CurrentTranslateMatrix);
            myTranslate(CurrentTranslateMatrix, translateDelta_X, X_AXIS);   // translate on the x axis by translateDelta_X

            mySetIdentity(CurrentScaleMatrix);
            myScale(CurrentScaleMatrix, scaleDelta);                         // scale by scaleDelta

            myMultiplyMatrix(RectMVMatrix, CurrentTranslateMatrix, CurrentScaleMatrix);   // RectMVMatrix = CurrentTranslateMatrix * CurrentScaleMatrix

            ...   // OpenGL calls
            glDrawArrays(...);
            eglswapbuffers(...);
        }

    I keep this RectMVMatrix value if there is no animation for the current frame (e.g. from the 101st frame onwards). Thanks, Arun AC

    Read the article

  • Issues with web hosting at home

    - by hari
    I want to host a small personal website at home. One basic problem I am hitting: from inside the home network, I cannot access my domain name. I have to use the local IP (something like 192.168.1.4) to access the website; this IP is the desktop that is hosting the website. Because of this mapping, I have issues setting up a simple WordPress blog on it too. How do I get past this issue? Edit: when I try to access www.example.com (my domain) from within my home network, I get redirected to my router login. PS: 1) I am using the DynDNS service to map my non-static IP to my domain name. 2) My port forwarding works fine.
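
    Being sent to the router's login page when using the public name from inside the LAN is the classic symptom of a router without NAT loopback (hairpinning) support. If the router has no loopback option to enable, a common workaround is to override the name locally on the machines inside the LAN; the address and domain below are the placeholders from the question and stand in for your own:

        # /etc/hosts on the desktop and on other LAN clients
        # (on Windows clients: C:\Windows\System32\drivers\etc\hosts)
        192.168.1.4    www.example.com example.com

    WordPress then also needs its Site URL set to the domain name rather than the LAN IP, so that generated links work both inside and outside the network.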

    Read the article

  • How to sync a calendar with Android without Google?

    - by YSN
    Hi folks, is there a way to sync an Ubuntu calendar application like Thunderbird Lightning or Evolution with an Android device without using Google Calendar? At the moment I am syncing my Thunderbird Lightning calendars on different computers via Dropbox, which is much more reliable than Google Calendar. Another big advantage over Google Calendar is that I can access my appointments offline as well, since the calendar files are synced onto the hard drive of each computer by Dropbox. I'd like to access those calendars via my Android device as well. The Dropbox app for Android does not support automatic syncing yet, so it seems like I have to use another service. Apart from that, I guess I need an Android app that can access a calendar file stored in ics format. Thanks in advance YSN

    Read the article

  • Iptables issue: can't SSH to remote machines

    - by Lonston
    I want to SSH to the 192.168.1.15 server from my machine; my IP is 192.168.1.99 and the destination, 192.168.1.15, is up. This is a LAN with 30 machines connected to the network and working fine. I'm playing around on the local machines because I need to apply the same rules on a production VPS. I applied the iptables policies below on my machine (192.168.1.99), after which I can't receive any packets from outside and I can't send any packets outside:

        iptables -P INPUT DROP
        iptables -P OUTPUT DROP
        iptables -P FORWARD DROP

    After the above chain I added the rules below, which should allow SSH from my machine to 192.168.1.15, but I still can't access 192.168.1.15:

        iptables -A INPUT -p tcp -i eth0 --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -p tcp -o eth0 --sport 22 -m state --state ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -p tcp --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A INPUT -i eth0 -p tcp --sport 22 -m state --state ESTABLISHED -j ACCEPT

    Can anyone please check whether my rules are right? I still can't access the .15 machine.
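
    For comparison, a minimal ruleset that only permits outbound SSH to that one host usually looks something like the sketch below (the IPs are taken from the question; note the loopback rules, which are easy to forget once the default policies are DROP and whose absence breaks a surprising number of tools):

        iptables -F
        iptables -P INPUT DROP
        iptables -P OUTPUT DROP
        iptables -P FORWARD DROP

        # Loopback traffic
        iptables -A INPUT  -i lo -j ACCEPT
        iptables -A OUTPUT -o lo -j ACCEPT

        # Outbound SSH from this machine (192.168.1.99) to 192.168.1.15
        iptables -A OUTPUT -p tcp -d 192.168.1.15 --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A INPUT  -p tcp -s 192.168.1.15 --sport 22 -m state --state ESTABLISHED     -j ACCEPT

    If that works, the original rules can be reintroduced one at a time; it is also worth checking with 'ip addr' that the interface really is eth0, since rules bound to the wrong interface name never match.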

    Read the article

  • Mixing Forms and Token Authentication in a single ASP.NET Application (the Details)

    - by Your DisplayName here!
    The scenario described in my last post works because of the design around HTTP modules in ASP.NET. Authentication-related modules (like Forms authentication and WIF WS-Fed/Sessions) typically subscribe to three events in the pipeline – AuthenticateRequest/PostAuthenticateRequest for pre-processing and EndRequest for post-processing (like making redirects to a login page). In the pre-processing stage it is the modules' job to determine the identity of the client based on incoming HTTP details (like a header, cookie, form post) and set HttpContext.User and Thread.CurrentPrincipal. The actual page (in the ExecuteHandler event) "sees" the identity that the last module has set. So in our case there are three modules in effect:

    - FormsAuthenticationModule (AuthenticateRequest, EndRequest)
    - WSFederationAuthenticationModule (AuthenticateRequest, PostAuthenticateRequest, EndRequest)
    - SessionAuthenticationModule (AuthenticateRequest, PostAuthenticateRequest)

    So let's have a look at the different scenarios we have when mixing Forms auth and WS-Federation.

    Anonymous request to unprotected resource
    This is the easiest case. Since there is no WIF session cookie or a FormsAuth cookie, these modules do nothing. The WSFed module creates an anonymous ClaimsPrincipal and calls the registered ClaimsAuthenticationManager (if any) to transform it. The result (by default an anonymous ClaimsPrincipal) gets set.

    Anonymous request to FormsAuth protected resource
    This is the scenario where an anonymous user tries to access a FormsAuth protected resource for the first time. The principal is anonymous and before the page gets rendered, the Authorize attribute kicks in. The attribute determines that the user needs authentication and therefore sets a 401 status code and ends the request. Now execution jumps to the EndRequest event, where the FormsAuth module takes over. The module then converts the 401 to a redirect (302) to the forms login page. If authentication is successful, the login page sets the FormsAuth cookie.

    FormsAuth authenticated request to a FormsAuth protected resource
    Now a FormsAuth cookie is present, which gets validated by the FormsAuth module. This cookie gets turned into a GenericPrincipal/FormsIdentity combination. The WS-Fed module turns the principal into a ClaimsPrincipal and calls the registered ClaimsAuthenticationManager. The outcome of that gets set on the context.

    Anonymous request to STS protected resource
    This time the anonymous user tries to access an STS protected resource (a controller decorated with the RequireTokenAuthentication attribute). The attribute determines that the user needs STS authentication by checking the authentication type on the current principal. If this is not Federation, the redirect to the STS will be made. After successful authentication at the STS, the STS posts the token back to the application (using WS-Federation syntax).

    Postback from STS authentication
    After the postback, the WS-Fed module finds the token response and validates the contained token. If successful, the token gets transformed by the ClaimsAuthenticationManager, and the outcome is a) stored in a session cookie, and b) set on the context.

    STS authenticated request to an STS protected resource
    This time the WIF Session authentication module kicks in because it can find the previously issued session cookie. The module re-hydrates the ClaimsPrincipal from the cookie and sets it.

    FormsAuth and STS authenticated request to a protected resource
    This is kind of an odd case – e.g. the user first authenticated using Forms and after that using the STS. This time the FormsAuth module does its work, and then afterwards the session module stomps over the context with the session principal. In other words, the STS identity wins.

    What about roles?
    A common way to set roles in ASP.NET is to use the role manager feature. There is a corresponding HTTP module for that (RoleManagerModule) that handles PostAuthenticateRequest. Does this collide with the above combinations? No it doesn't! When the WS-Fed module turns existing principals into a ClaimsPrincipal (like it did with the FormsIdentity), it also checks for RolePrincipal (which is the principal type created by role manager), and turns the roles into role claims. Nice! But as you can see in the last scenario above, this might result in unnecessary work, so I would rather recommend consolidating all role work (and other claims transformations) into the ClaimsAuthenticationManager. In there you can check for the authentication type of the incoming principal and act accordingly. HTH
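
    Since the post recommends consolidating role work and other claims transformations into the ClaimsAuthenticationManager, here is a minimal sketch of what that can look like. It assumes WIF 1.0 (the Microsoft.IdentityModel assembly); on .NET 4.5 the equivalent types live in System.Security.Claims with slightly different signatures, and the class name and claim values below are invented for the example:

        using Microsoft.IdentityModel.Claims;

        public class MyClaimsAuthenticationManager : ClaimsAuthenticationManager
        {
            public override IClaimsPrincipal Authenticate(string resourceName, IClaimsPrincipal incomingPrincipal)
            {
                // Leave anonymous principals alone
                if (incomingPrincipal == null || !incomingPrincipal.Identity.IsAuthenticated)
                {
                    return incomingPrincipal;
                }

                var identity = (IClaimsIdentity)incomingPrincipal.Identity;

                // Branch on how the user was authenticated: Forms vs. federation/session
                if (identity.AuthenticationType == "Forms")
                {
                    // e.g. look up roles in your own store and add them as role claims
                    identity.Claims.Add(new Claim(ClaimTypes.Role, "ForumUser"));
                }

                return incomingPrincipal;
            }
        }

    The manager is then registered via the claimsAuthenticationManager element in the microsoft.identityModel/service section of web.config, so it runs for every request regardless of which module established the principal.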

    Read the article

  • How do I share different files in a git repo with different people?

    - by David Faux
    In a single directory with a Git root folder, I have a bunch of files. I am working on one of those files, X.py, with my friend Alice. The other files I am working on with other people. I want Alice (and everyone else) to have access to X.py. I want Alice to only have access to X.py though. How can I achieve this with Git? Is there a way I can split a directory into two repos? That sounds rather cumbersome. Maybe I could add a remote repo that Alice can access containing X.py?
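
    That last idea is essentially what git submodules provide: X.py lives in its own repository that only Alice (and you) can reach, and the main repository just records a pointer to it. A sketch, with made-up URLs and paths:

        # Create a separate repository for the shared file
        mkdir shared-x && cd shared-x
        git init
        cp ../main-project/X.py .
        git add X.py
        git commit -m "Import X.py"
        git remote add origin git@example.com:you/shared-x.git
        git push -u origin master

        # Reference it from the main project as a submodule
        cd ../main-project
        git rm X.py
        git submodule add git@example.com:you/shared-x.git shared-x
        git commit -m "Track X.py via the shared-x submodule"

    X.py then lives at shared-x/X.py in the main project, Alice clones only the shared-x repository, and access to everything else is controlled by who can reach the main repository.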

    Read the article

  • Desktop sharing options for Ubuntu 12.04 with Unity

    - by Stefan Buynov
    I would like to be able to access my office Ubuntu 12.04 machine from home, from a Mac Mini running Mac OS X. I have a VPN and I am able to access my office machine over SSH, so connectivity is not a problem. I browsed other questions, and it seems that there are several options:

    - VNC
    - XRDP
    - FreeNX (haven't heard of this one before)

    Are there any others? I have been using Remote Desktop on Windows before, and I actually like it; I'm not sure how well XRDP is implemented. I also used VNC several years back, and I didn't like its performance back then - not sure if things have changed since. As I said above, the machine I want to access is running Ubuntu 12.04, with Unity. And I am using Unity by choice - I really like it and would like to continue using it :) The client computer is running Mac OS X (Snow Leopard). Based on your previous experience, what is the best setup for this environment?

    Read the article

  • Multiple vulnerabilities in Thunderbird

    - by chandan
    Component: Thunderbird
    Product and Resolution: Solaris 11 11/11 SRU 2; Solaris 10: Contact Support

    CVE | Description | CVSSv2 Base Score
    CVE-2011-2372 | Permissions, Privileges, and Access Controls vulnerability | 3.5
    CVE-2011-2995 | Denial Of Service (DoS) vulnerability | 10.0
    CVE-2011-2997 | Denial Of Service (DoS) vulnerability | 10.0
    CVE-2011-2998 | Denial Of Service (DoS) vulnerability | 10.0
    CVE-2011-2999 | Permissions, Privileges, and Access Controls vulnerability | 4.3
    CVE-2011-3000 | Improper Control of Generation of Code ('Code Injection') vulnerability | 4.3
    CVE-2011-3001 | Permissions, Privileges, and Access Controls vulnerability | 4.3
    CVE-2011-3005 | Denial Of Service (DoS) vulnerability | 9.3
    CVE-2011-3232 | Improper Control of Generation of Code ('Code Injection') vulnerability | 9.3

    This notification describes vulnerabilities fixed in third-party components that are included in Sun's product distribution. Information about vulnerabilities affecting Oracle Sun products can be found on the Oracle Critical Patch Updates and Security Alerts page.

    Read the article

  • 6 Ways To Secure Your Dropbox Account

    - by Chris Hoffman
    Dropbox is a hugely popular cloud storage service beloved by many. Unfortunately, it's had a history of security problems, ranging from compromised accounts to once allowing access to every Dropbox account without requiring a password for several hours. If you're using Dropbox, there are a variety of ways you can secure your account against unauthorized access and protect your files even if someone does gain access to your account.

    Read the article

  • ssh tunnel to a remote CUPS server

    - by drevicko
    There is a CUPS server beyond a firewall and I'd like to use its printers. I have SSH access to computers that can access the CUPS server, and can get at the server's web interface by forwarding, say, port 1631. I cannot forward local port 631, as I don't have root access to anything on the server's network. In Ubuntu's 'Printing' control panel I can enter the address of a server, but I've not been able to connect through the forwarded port (localhost:1631, which is forwarded to the remote CUPS server's port 631). Any ideas?
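
    A sketch of one way to do this with a forwarded port; the host names are placeholders, and the client.conf change points the whole local CUPS client at the tunnel:

        # Forward local port 1631 to port 631 on the CUPS server, via the gateway
        # machine you already have SSH access to
        ssh -f -N -L 1631:cups-server.internal:631 user@gateway.example.com

        # Point the CUPS client libraries at the tunnel
        echo "ServerName localhost:1631" | sudo tee /etc/cups/client.conf

        # Verify that the remote queues are visible
        lpstat -h localhost:1631 -p -d

    With client.conf in place the remote printers should show up in the printing dialogs; remove the file again to go back to the local cupsd. Setting the CUPS_SERVER=localhost:1631 environment variable per application is a less invasive alternative.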

    Read the article

  • Reading the tea leaves from Windows Azure support

    - by jamiet
    A few idle thoughts… Three months ago I had an issue regarding Windows Azure where I was unable to login to the management portal. At the time I contacted Azure support, the issue was soon resolved and I thought no more about it. Until today, that is, when I received an email from Azure support providing a detailed analysis of the root cause, the fix, and moreover precise details about when and where things occurred. The email itself is interesting and I have included the entirety of it below. A few things were interesting to me:

    - The level of detail and the diligence in investigating and reporting the issue I found really rather impressive. They even outline the number of users that were affected (127 in case you can't be bothered reading). Compare this to the quite pathetic support that another division within Microsoft, Skype, provided to Greg Low recently: Skype support and dead parrot sketches

    - This line: "Windows Azure performed a planned change from using the Microsoft account service (formerly Windows Live ID) to the Azure Active Directory (AAD) as its primary authentication mechanism on August 24th. This change was made to enable future innovation in the area of authentication – particularly for organizationally owned identities, identity federation, stronger authentication methods and compliance certification." I also found to be particularly interesting. I have long thought that one of the reasons Microsoft has proved to be such a money-making machine in the enterprise is because they provide the infrastructure and then upsell on top of that – and nothing is more infrastructural than Active Directory. It has struck me of late that they are trying to make the same play in the cloud by tying all their services into Azure Active Directory, and here we see a clear indication of that by making AAD the authentication mechanism for anyone using Windows Azure.

    I get the feeling that we're going to hear much much more about AAD in the future; isn't it about time we could log on to SQL Azure (now Windows Azure SQL Database) without resorting to SQL authentication, for example? And why do Microsoft have two identity providers – Microsoft Account (aka Windows Live ID) and AAD – isn't it about time those things were combined?

    As I said, just some idle thoughts. Below is the transcript of the email if you are interested.

    @Jamiet This is regarding the support request <redacted> where in you were not able to login into the windows azure management portal with live id. We are providing you with the summary, root cause analysis and information about permanent fix:

    Incident Title: You were unable to access Windows Azure Portal after Microsoft Account to Azure Active Directory account Migration.
    Service Impacted: Management Portal
    Incident Start Date and Time: 8/24/2012 4:30:00 PM
    Date and Time Service was Restored: 10/17/2012 12:00:00 AM

    Summary: Windows Azure performed a planned change from using the Microsoft account service (formerly Windows Live ID) to the Azure Active Directory (AAD) as its primary authentication mechanism on August 24th. This change was made to enable future innovation in the area of authentication – particularly for organizationally owned identities, identity federation, stronger authentication methods and compliance certification. While this migration was largely transparent to Windows Azure users, a small number of users whose sign-in names were part of a Windows Live Custom Domain were unable to login. This incompatibility was not discovered during the Quality Assurance testing phase prior to the migration.

    Customer Impact: Customers whose sign-in names were part of a Windows Live Custom Domain were unable to sign-in the Management Portal after ~4:00 p.m. PST on August 24th, 2012. We determined that the issue did impact at least 127 users in 98 of these Windows Live Custom Domains and had a maximum potential impact of 1,110 users in total.

    Root Cause: The root cause of the issue was an incompatibility in the AAD authentication service to handle logins from Microsoft accounts whose sign-in names were part of a Windows Live Custom Domains. This issue was not discovered during the Quality Assurance testing phase prior to the migration from Microsoft Account (MSA) to AAD.

    Mitigations: The issue was mitigated for the majority of affected users by 8:20 a.m. PST on August 25th, 2012 by running some internal scripts to correct many known Windows Live Custom Domains. The remaining affected domains fell into two categories:

    - Windows Live Custom Domains that were not corrected by 8/25/2012. An additional 48 Windows Live Custom Domains were fixed in the weeks following the incident within 2 business days after the AAD team received an escalation from product support regarding those accounts.
    - Windows Live Custom domains that were also provisioned in Office365. Some of the affected Windows Live Custom Domains had already been provisioned in AAD because their owners signed up for Office365 which is a service that also uses AAD. In these cases the Azure customers had to work around the issue by renaming their Microsoft Account or using a different Microsoft Account to administer their Azure subscription.

    Permanent Fix: The Azure Active Directory team permanently fixed the issue for all customers on 10/17/2012 in an upgraded release of the AAD service.

    Read the article
