Search Results

Search found 7554 results on 303 pages for 'shared secret'.


  • Can't correctly install Lazarus

    - by user206316
    I have a little problem with installing and running Lazarus. I just upgraded Ubuntu from 13.04 to 13.10. On 13.04 I could install Lazarus without any problems, but in 13.10 Lazarus disappeared, and when I tried to install it from Ubuntu Software Center it said something like "lazarus-ide-0.9.30.4 does not exist in your software sources". After some research on the net I deleted all files from the earlier installations and downloaded the .deb packages from SourceForge, but when I try to install fpc-src the following error shows up:
        (Reading database ... 100% (Reading database ... 239063 files and directories currently installed.)
        Unpacking fpc-src (from .../Stiahnut/Lazarus/fpc-src.deb) ...
        dpkg: error processing /home/richi/Stiahnut/Lazarus/fpc-src.deb (--install):
        trying to overwrite '/usr/share/fpcsrc/2.6.2/rtl/nativent/tthread.inc', which is also in package fpc-source-2.6.2 2.6.2-5
        dpkg-deb (subprocess): decompressing archive member: internal gzip write error: Broken pipe
        dpkg-deb: error: subprocess <decompress> returned error exit status 2
        dpkg-deb (subprocess): cannot copy archive member from '/home/richi/Stiahnut/Lazarus/fpc-src.deb' to decompressor pipe: failed to write (Broken pipe)
    When I start Lazarus it of course tells me that it cannot find the FPC compiler and the FPC sources. I really need this program for school and I do not want to reinstall the OS (Ubuntu 13.10, 64-bit). P.S.: I am not skilled in Linux, so if you know the commands to fix this, please write them out so I can copy and paste them. P.P.S.: Sorry for the bad English, I am Slovak. P.P.P.S.: Thanks very much for any answers. Update: output from sudo dpkg -l | grep "^rc": richi@Richi-Ubuntu:~/lazarus1.0.12$ sudo dpkg -l | grep "^rc" rc account-plugin-generic-oauth 0.10bzr13.03.26-0ubuntu1.1 amd64 GNOME Control Center account plugin for single signon - generic OAuth rc appmenu-gtk:amd64 12.10.3daily13.04.03-0ubuntu1 amd64 Export GTK menus over DBus rc appmenu-gtk3:amd64 12.10.3daily13.04.03-0ubuntu1 amd64 Export GTK menus over DBus rc fp-compiler-2.6.0 2.6.0-9 amd64 Free Pascal - compiler rc fp-utils-2.6.0 2.6.0-9 amd64 Free Pascal - utilities rc lazarus-ide-0.9.30.4 0.9.30.4-4 amd64 IDE for Free Pascal - common IDE files rc lazarus-ide-1.0.10 1.0.10+dfsg-1 amd64 IDE for Free Pascal - common IDE files rc lcl-utils-0.9.30.4 0.9.30.4-4 amd64 Lazarus Components Library - command line build tools rc lcl-utils-1.0.10 1.0.10+dfsg-1 amd64 Lazarus Components Library - command line build tools rc libbamf3-1:amd64 0.4.0daily13.06.19~13.04-0ubuntu1 amd64 Window matching library - shared library rc libboost-filesystem1.49.0 1.49.0-4 amd64 filesystem operations (portable paths, iteration over directories, etc) in C++ rc libboost-signals1.49.0 1.49.0-4 amd64 managed signals and slots library for C++ rc libboost-system1.49.0 1.49.0-4 amd64 Operating system (e.g.
diagnostics support) library rc libboost-thread1.49.0 1.49.0-4 amd64 portable C++ multi-threading rc libbrlapi0.5:amd64 4.4-8ubuntu4 amd64 braille display access via BRLTTY - shared library rc libcamel-1.2-40 3.6.4-0ubuntu1.1 amd64 Evolution MIME message handling library rc libcolumbus0-0 0.4.0daily13.04.16~13.04-0ubuntu1 amd64 error tolerant matching engine - shared library rc libdns95 1:9.9.2.dfsg.P1-2ubuntu2.1 amd64 DNS Shared Library used by BIND rc libdvbpsi7 0.2.2-1 amd64 library for MPEG TS and DVB PSI tables decoding and generating rc libebackend-1.2-5 3.6.4-0ubuntu1.1 amd64 Utility library for evolution data servers rc libedata-book-1.2-15 3.6.4-0ubuntu1.1 amd64 Backend library for evolution address books rc libedata-cal-1.2-18 3.6.4-0ubuntu1.1 amd64 Backend library for evolution calendars rc libgc1c3:amd64 1:7.2d-0ubuntu5 amd64 conservative garbage collector for C and C++ rc libgd2-xpm:amd64 2.0.36~rc1~dfsg-6.1ubuntu1 amd64 GD Graphics Library version 2 rc libgd2-xpm:i386 2.0.36~rc1~dfsg-6.1ubuntu1 i386 GD Graphics Library version 2 rc libgnome-desktop-3-4 3.6.3-0ubuntu1 amd64 Utility library for loading .desktop files - runtime files rc libgphoto2-2:amd64 2.4.14-2 amd64 gphoto2 digital camera library rc libgphoto2-2:i386 2.4.14-2 i386 gphoto2 digital camera library rc libgphoto2-port0:amd64 2.4.14-2 amd64 gphoto2 digital camera port library rc libgphoto2-port0:i386 2.4.14-2 i386 gphoto2 digital camera port library rc libgtksourceview-3.0-0:amd64 3.6.3-0ubuntu1 amd64 shared libraries for the GTK+ syntax highlighting widget rc libgweather-3-1 3.6.2-0ubuntu1 amd64 GWeather shared library rc libharfbuzz0:amd64 0.9.13-1 amd64 OpenType text shaping engine rc libibus-1.0-0:amd64 1.4.2-0ubuntu2 amd64 Intelligent Input Bus - shared library rc libical0 0.48-2 amd64 iCalendar library implementation in C (runtime) rc libimobiledevice3 1.1.4-1ubuntu6.2 amd64 Library for communicating with the iPhone and iPod Touch rc libisc92 1:9.9.2.dfsg.P1-2ubuntu2.1 amd64 ISC Shared Library used by BIND rc libkms1:amd64 2.4.46-1 amd64 Userspace interface to kernel DRM buffer management rc libllvm3.2:i386 1:3.2repack-7ubuntu1 i386 Low-Level Virtual Machine (LLVM), runtime library rc libmikmod2:amd64 3.1.12-5 amd64 Portable sound library rc libpackagekit-glib2-14:amd64 0.7.6-3ubuntu1 amd64 Library for accessing PackageKit using GLib rc libpoppler28:amd64 0.20.5-1ubuntu3 amd64 PDF rendering library rc libraw5:amd64 0.14.7-0ubuntu1.13.04.2 amd64 raw image decoder library rc librhythmbox-core6 2.98-0ubuntu5 amd64 support library for the rhythmbox music player rc libsdl-mixer1.2:amd64 1.2.12-7ubuntu1 amd64 Mixer library for Simple DirectMedia Layer 1.2, libraries rc libsnmp15 5.4.3~dfsg-2.7ubuntu1 amd64 SNMP (Simple Network Management Protocol) library rc libsyncdaemon-1.0-1 4.2.0-0ubuntu1 amd64 Ubuntu One synchronization daemon library rc libunity-core-6.0-5 7.0.0daily13.06.19~13.04-0ubuntu1 amd64 Core library for the Unity interface. 
rc libusb-0.1-4:i386 2:0.1.12-23.2ubuntu1 i386 userspace USB programming library rc libwayland0:amd64 1.0.5-0ubuntu1 amd64 wayland compositor infrastructure - shared libraries rc linux-image-3.8.0-19-generic 3.8.0-19.30 amd64 Linux kernel image for version 3.8.0 on 64 bit x86 SMP rc linux-image-3.8.0-31-generic 3.8.0-31.46 amd64 Linux kernel image for version 3.8.0 on 64 bit x86 SMP rc linux-image-extra-3.8.0-19-generic 3.8.0-19.30 amd64 Linux kernel image for version 3.8.0 on 64 bit x86 SMP rc linux-image-extra-3.8.0-31-generic 3.8.0-31.46 amd64 Linux kernel image for version 3.8.0 on 64 bit x86 SMP rc screen-resolution-extra 0.15ubuntu1 all Extension for the GNOME screen resolution applet rc unity-common 7.0.0daily13.06.19~13.04-0ubuntu1 all Common files for the Unity interface.

    Read the article

  • Tweepy + App Engine Example OAuth Help

    - by Wasauce
    Hi, I am trying to follow the Tweepy App Engine OAuth example app in my own app but am running into trouble. Here is a link to the Tweepy example code: http://github.com/joshthecoder/tweepy-examples Specifically, look at: http://github.com/joshthecoder/tweepy-examples/blob/master/appengine/oauth_example/handlers.py Here is the relevant snippet of my code:
        try:
            authurl = auth.get_authorization_url()
            request_token = auth.request_token
            db_user.token_key = request_token.key
            db_user.token_secret = request_token.secret
            db_user.put()
        except tweepy.TweepError, e:
            # Failed to get a request token
            self.generate('error.html', {
                'error': e,
            })
            return
        self.generate('signup.html', {
            'authurl': authurl,
            'request_token': request_token,
            'request_token.key': request_token.key,
            'request_token.secret': request_token.secret,
        })
    As you can see, my code is very similar to the example. However, when I compare the request_token.key and request_token.secret that are rendered on my signup page with the request_token.key and request_token.secret found in the datastore, they do not match. Any guidance on what I am doing wrong here? Thanks! Reference Links:
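
    For reference, the callback side of the linked example typically looks something like the hypothetical sketch below. It assumes the Tweepy 1.x-era API that the example targets (OAuthHandler.set_request_token and get_access_token, both changed in later Tweepy releases), and lookup_request_token / save_access_token are placeholder helpers rather than real Tweepy or App Engine calls. The point it illustrates is that the key/secret rendered on the signup page, the pair persisted to the datastore, and the pair rebuilt in the callback all have to be the same request token for the later exchange to succeed.

        # Hypothetical callback sketch (Tweepy 1.x-era API; helper names are placeholders)
        import tweepy

        def handle_callback(request, consumer_key, consumer_secret):
            oauth_token = request.get('oauth_token')
            verifier = request.get('oauth_verifier')

            # Re-load the request token that the signup handler persisted earlier.
            stored = lookup_request_token(oauth_token)      # placeholder datastore query

            auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
            auth.set_request_token(stored.token_key, stored.token_secret)
            access_token = auth.get_access_token(verifier)  # exchange for an access token

            # Persist the access token; later API calls are signed with this pair.
            save_access_token(access_token.key, access_token.secret)   # placeholder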

    Read the article

  • java equivalent to php's hmac-SHA1

    - by Bee
    I'm looking for a Java equivalent to this PHP call: hash_hmac('sha1', "test", "secret") I tried this, using javax.crypto.Mac, but the two do not agree:
        String mykey = "secret";
        String test = "test";
        try {
            Mac mac = Mac.getInstance("HmacSHA1");
            SecretKeySpec secret = new SecretKeySpec(mykey.getBytes(), "HmacSHA1");
            mac.init(secret);
            byte[] digest = mac.doFinal(test.getBytes());
            String enc = new String(digest);
            System.out.println(enc);
        } catch (Exception e) {
            System.out.println(e.getMessage());
        }
    The outputs with key = "secret" and test = "test" do not match.
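
    The mismatch here is almost always one of output encoding rather than of the MAC itself: PHP's hash_hmac() returns a lowercase hex string by default, while the Java snippet prints the raw 20 digest bytes via new String(digest). Hex-encoding the Java digest (or passing true as hash_hmac's fourth argument to get raw binary from PHP) should make the two agree. A quick illustration of the difference, sketched in Python 3:

        import hashlib
        import hmac

        key = b"secret"
        message = b"test"
        mac = hmac.new(key, message, hashlib.sha1)

        # What PHP's hash_hmac('sha1', "test", "secret") returns by default:
        print(mac.hexdigest())   # 40-character lowercase hex string

        # Roughly what the Java code above prints: the raw 20 digest bytes
        print(mac.digest())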

    Read the article

  • Twitter oauth php problems

    - by Patrick Gates
    I'm writing some backend script for a Twitter app, and here is how it goes. On the app you click a button that sends you to login.php on my server, which logs into my database and connects to Twitter with my consumer key and secret:
        $to = new TwitterOAuth($consumer_key, $consumer_secret);
        $tok = $to->getRequestToken();
        $request_link = $to->getAuthorizeURL($tok);
    It then writes the token and secret to the database, sets a session variable equal to the database id of that token and secret, and redirects to $request_link. You then go through the login process on Twitter, which redirects you to callback.php on my server. Callback.php logs into the database again, gets the new token and secret, writes the new token and secret to the database, and then prompts you to go back to the app. Then, on the app, all I'm trying to do is access the basic credentials with $to->get('account/verify_credentials'), and it keeps coming back with "could not authenticate you". What am I doing wrong? Thank you for all the help.
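
    For comparison, here is the usual shape of the full three-legged OAuth 1.0a flow, sketched in Python with requests_oauthlib purely for illustration (the keys, callback URL and verifier below are placeholders; the PHP TwitterOAuth library follows the same sequence). The detail the sketch emphasises is that the request token obtained in login.php must be exchanged, together with the oauth_verifier passed to the callback, for an access token, and that verify_credentials has to be signed with that access token, not with the request token.

        from requests_oauthlib import OAuth1Session

        CONSUMER_KEY = "..."      # placeholder
        CONSUMER_SECRET = "..."   # placeholder

        # login step: obtain a request token, persist it, send the user to Twitter
        oauth = OAuth1Session(CONSUMER_KEY, client_secret=CONSUMER_SECRET,
                              callback_uri="https://example.com/callback.php")
        request_token = oauth.fetch_request_token("https://api.twitter.com/oauth/request_token")
        authorize_url = oauth.authorization_url("https://api.twitter.com/oauth/authorize")
        # store request_token["oauth_token"] / ["oauth_token_secret"], redirect to authorize_url

        # callback step: exchange the stored request token + verifier for an access token
        oauth = OAuth1Session(CONSUMER_KEY, client_secret=CONSUMER_SECRET,
                              resource_owner_key=request_token["oauth_token"],
                              resource_owner_secret=request_token["oauth_token_secret"],
                              verifier="oauth_verifier-from-the-callback-query-string")
        access_token = oauth.fetch_access_token("https://api.twitter.com/oauth/access_token")
        # store access_token["oauth_token"] / ["oauth_token_secret"] for later use

        # API step: sign verify_credentials with the access token, not the request token
        api = OAuth1Session(CONSUMER_KEY, client_secret=CONSUMER_SECRET,
                            resource_owner_key=access_token["oauth_token"],
                            resource_owner_secret=access_token["oauth_token_secret"])
        resp = api.get("https://api.twitter.com/1.1/account/verify_credentials.json")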

    Read the article

  • Azure application working on emulator but not on azure cloud

    - by Hisham Riaz
    I am developing my MVC3 application in Visual Web Developer 2010 Express, having migrated my MVC3 (cshtml) files onto MVC2. It works great on the local system using the emulator, but once I deploy the application to Azure it gives runtime errors, for example: The layout page "~/Views/Shared/test_page.cshtml" could not be found at the following path: "~/Views/Shared/test_page.cshtml".
        Source Error:
        Line 8:  //Layout = "~/Views/Shared/upload.cshtml";
        Line 9:  //Layout = "~/Views/Shared/_Layout2.cshtml";
        Line 10: Layout = "~/Views/Shared/test_page.cshtml";
        Line 11: }
        Line 12: else
    The code is as follows. _ViewStart.cshtml:
        @{
            string AccId = Request.QueryString["AccId"].ToString();
            if (AccId == "0")
            {
                //Layout = "~/Views/Shared/upload.cshtml";
                //Layout = "~/Views/Shared/_Layout2.cshtml";
                Layout = "~/Views/Shared/test_page.cshtml";
            }
            else
            {
                string LayOutPagePath = MVCTest.Models.ComponentClass.GetLayOutPagePath(AccId);
                Layout = LayOutPagePath;
            }
        }
        .........
    The page does exist, and it works fine in the Azure emulator, but not in the Azure cloud. Code for test_page.cshtml:
        @{
            var result = "1234567890";
            var temp_xml = MVCTest.Models.ComponentClass.GetTemplateAndTheme("1"); // returns xml
            string LayOutPagePath = MVCTest.Models.ComponentClass.GetLayOutPagePath("1"); // returns string
        }
        @RenderBody()
        test_page
        @temp_xml
        @result
        @LayOutPagePath

    Read the article

  • Mercurial confusion - commit / push, backouts

    - by Madmanguruman
    I'm trying to set up a repository on a shared filesystem. I'm using Mercurial 2.1.2 on a Windows-based architecture. I start with an empty folder on the shared filesystem and create a repository in it. After this, I dump in the baseline files, and add them to versioning, then commit the changes. I then clone the repository to my local hard drive. I then make a change in my local repository, commit it, then push back to the shared filesystem repository. The shared repo graph I get in TortoiseHG looks strange (to me). This is the shared repo: This is the local repo: On the shared repo, the working directory always shows up on the top, then the graph goes 'down' to rev. 0 then back 'up' again through various revisions. It looks to me like I have two different branches, even though everything is on the default branch. Also, that 'top' revision always says "* Working Directory * Not a head revision!" I noticed that in my local repository, I don't get that dangling working directory at the top of the list - everything is in one branch. I also noticed that on my local repository, I can back out the tip revision with no problem. On the shared filesystem repository, I cannot, since I get an error ("Cannot backout change on a different branch"). How can this be? Aren't they supposed to be identical to each other? Am I fundamentally doing something wrong?

    Read the article

  • Slicing the EDG

    - by Antony Reynolds
    Different SOA Domain Configurations In this blog entry I would like to introduce three different configurations for a SOA environment.  I have omitted load balancers and OTD/OHS as they introduce a whole new round of discussion.  For each possible deployment architecture I have identified some of the advantages. Super Domain This is a single EDG style domain for everything needed for SOA/OSB.   It extends the standard EDG slightly but otherwise assumes a single “super” domain. This is basically the SOA EDG.  I have broken out JMS servers and Coherence servers to improve scalability and reduce dependencies. Key Points Separate JMS allows those servers to be kept up separately from rest of SOA Domain, allowing JMS clients to post messages even if rest of domain is unavailable. JMS servers are only used to host application specific JMS destinations, SOA/OSB JMS destinations remain in relevant SOA/OSB managed servers. Separate Coherence servers allow OSB cache to be offloaded from OSB servers. Use of Coherence by other components as a shared infrastructure data grid service. Coherence cluster may be managed by WLS but more likely run as a standalone Coherence cluster. Benefits Single Administration Point (1 Admin Server) Closely follows EDG with addition of application specific JMS servers and standalone Coherence servers for OSB caching and application specific caches. Coherence grid can be scaled independent of OSB/SOA. JMS queues provide for inter-application communication. Drawbacks Patching is an all or nothing affair. Startup time for SOA may be slow if large number of composites deployed. Multiple Domains This extends the EDG into multiple domains, allowing separate management and update of these domains.  I see this type of configuration quite often with customers, although some don't have OWSM, others don't have separate Coherence etc. SOA & BAM are kept in the same domain as little benefit is obtained by separating them. Key Points Separate JMS allows those servers to be kept up separately from rest of SOA Domain, allowing JMS clients to post messages even if other domains are unavailable. JMS servers are only used to host application specific JMS destinations, SOA/OSB JMS destinations remain in relevant SOA/OSB managed servers. Separate Coherence servers allow OSB cache to be offloaded from OSB servers. Use of Coherence by other components as a shared infrastructure data grid service. Coherence cluster may be managed by WLS but more likely run as a standalone Coherence cluster. Benefits Follows EDG but in separate domains and with addition of application specific JMS servers and standalone Coherence servers for OSB caching and application specific caches. Coherence grid can be scaled independent of OSB/SOA. JMS queues provide for inter-application communication. Patch lifecycle of OSB/SOA/JMS are no longer lock stepped. JMS may be kept running independently of other domains allowing applications to insert messages fro later consumption by SOA/OSB. OSB may be kept running independent of other domains, allowing service virtualization to continue independent of other domains availability. All domains use same OWSM policy store (MDS-WSM). Drawbacks Multiple domains to manage and configure. Multiple Admin servers (single view requires use of Grid Control) Multiple Admin servers/WSM clusters waste resources. Additional homes needed to enjoy benefits of separate patching. Cross domain trust needs setting up to simplify cross domain interactions. 
Startup time for SOA may be slow if large number of composites deployed. Shared Service Environment This model extends the previous multiple domain arrangement to provide a true shared service environment.This extends the previous model by allowing multiple additional SOA domains and/or other domains to take advantage of the shared services.  Only one non-shared domain is shown, but there could be multiple, allowing groups of applications to share patching independent of other application groups. Key Points Separate JMS allows those servers to be kept up separately from rest of SOA Domain, allowing JMS clients to post messages even if other domains are unavailable. JMS servers are only used to host application specific JMS destinations, SOA/OSB JMS destinations remain in relevant SOA/OSB managed servers. Separate Coherence servers allow OSB cache to be offloaded from OSB servers. Use of Coherence by other components as a shared infrastructure data grid service Coherence cluster may be managed by WLS but more likely run as a standalone Coherence cluster. Shared SOA Domain hosts Human Workflow Tasks BAM Common "utility" composites Single OSB domain provides "Enterprise Service Bus" All domains use same OWSM policy store (MDS-WSM) Benefits Follows EDG but in separate domains and with addition of application specific JMS servers and standalone Coherence servers for OSB caching and application specific caches. Coherence grid can be scaled independent of OSB/SOA. JMS queues provide for inter-application communication. Patch lifecycle of OSB/SOA/JMS are no longer lock stepped. JMS may be kept running independently of other domains allowing applications to insert messages fro later consumption by SOA/OSB. OSB may be kept running independent of other domains, allowing service virtualization to continue independent of other domains availability. All domains use same OWSM policy store (MDS-WSM). Supports large numbers of deployed composites in multiple domains. Single URL for Human Workflow end users. Single URL for BAM end users. Drawbacks Multiple domains to manage and configure. Multiple Admin servers (single view requires use of Grid Control) Multiple Admin servers/WSM clusters waste resources. Additional homes needed to enjoy benefits of separate patching. Cross domain trust needs setting up to simplify cross domain interactions. Human Workflow needs to be specially configured to point to shared services domain. Summary The alternatives in this blog allow for patching to have different impacts, depending on the model chosen.  Each organization must decide the tradeoffs for itself.  One extreme is to go for the shared services model and have one domain per SOA application.  This requires a lot of administration of the multiple domains.  The other extreme is to have a single super domain.  This makes the entire enterprise susceptible to an outage at the same time due to patching or other domain level changes.  Hopefully this blog will help your organization choose the right model for you.

    Read the article

  • Should the hostname of my VPS point to the dedicated IP of my domain or to a shared one used for new account creation?

    - by thomas
    I leased a VPS which I want to use to sell shared hosting. It has 3 IPs, which I call A, B and C here for simplicity. The actual setup is: A = NS1.mydomain.com and host.mydomain.com, and is used to set up new accounts in the shared environment; B = NS2.mydomain.com; C = dedicated IP for mydomain.com (SSL secured). The more I read about DNS, the more confused I get; thus my question: is this configuration "good practice", especially the hostname pointing to A rather than to C? And what would be a better alternative?

    Read the article

  • libclntsh.so.11.1: cannot open shared object file.

    - by zhangzhong
    I want to schedule a task on Linux via crontab. The task is written in Python and has to import the cx_Oracle module, so I export ORACLE_HOME and LD_LIBRARY_PATH in .bash_profile, but it raises the error: libclntsh.so.11.1: cannot open shared object file. Since it is fine to run the task by issuing the command in a shell, like python a.py, I changed the crontab entry into a shell script which invokes my Python script, but the exception recurred:
        # the shell script scheduled in crontab
        #! bash
        python a.py
    Could you help me work out how to fix this?
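
    The usual cause is that cron starts jobs with a minimal environment and never reads .bash_profile, so ORACLE_HOME and LD_LIBRARY_PATH are simply unset when the job runs; exporting both variables inside the cron'd shell script itself (with a proper #!/bin/bash shebang) before calling python normally fixes it. An alternative is a self-contained launcher like the sketch below, which sets the variables and re-executes itself once so that the dynamic loader can find libclntsh before cx_Oracle is imported. The Oracle client path here is an assumption and has to be adjusted to the real installation.

        import os
        import sys

        ORACLE_HOME = "/usr/lib/oracle/11.2/client64"      # assumed path - adjust
        LIB_DIR = os.path.join(ORACLE_HOME, "lib")

        if os.environ.get("_ORACLE_ENV_SET") != "1":
            env = dict(os.environ,
                       _ORACLE_ENV_SET="1",
                       ORACLE_HOME=ORACLE_HOME,
                       LD_LIBRARY_PATH=LIB_DIR + ":" + os.environ.get("LD_LIBRARY_PATH", ""))
            # restart this same script with the corrected environment
            os.execve(sys.executable, [sys.executable] + sys.argv, env)

        import cx_Oracle  # the loader can now resolve libclntsh.so.11.1

        print(cx_Oracle.version)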

    Read the article

  • Custom ASP.NET MVC cache controllers in a shared hosting environment?

    - by Daniel Crenna
    I'm using custom controllers that cache static resources (CSS, JS, etc.) and images. I'm currently working with a hosting provider that has set me up under a full trust profile. Despite being in full trust, my controllers fail because the caching strategy relies on the File class to directly open a resource file prior to treatment and storage in memory. Is this something that would likely occur in all full trust shared hosting environments or is this specific to my host? The static files live within my application's structure and not in an arbitrary server path. It seems to me that custom caching would require code to access the file directly, and am hoping someone else has dealt with this issue.

    Read the article

  • Google's Oauth for Installed apps vs. Oauth for Web Apps

    - by burgerguy
    So I'm having trouble understanding something... If you do Oauth for Web Apps, you register your site with a callback URL and get a unique consumer secret key. But once you've obtained an Oauth for Web Apps token, you don't have to generate Oauth calls to the google server from your registered domain. I regularly use my key and token from scripts running via an apache server at localhost on my laptop and Google never says "you're not sending this request from the registered domain." It just sends me the data. Now, as I understand it, if you do Oauth for Installed Apps, you use "anonymous" instead of a secret key you got from Google. I've been thinking of just using the OAuth for Web Apps auth method, then passing that token to an installed app that has my secret code embedded in its innards. The worry is that the code could be discovered by bad people. But what's more secure... making them work for the secret code or letting them default to anonymous? What really goes bad if the "secret" is discovered when the alternative is using "anonymous" as the secret?

    Read the article

  • Handling changes in an interface shared across multiple solutions?

    - by Anthony Mastrean
    Our "main" solution is the development code: shared libraries, services, UI projects, etc. The other solution is an integration and automated tests solution. It references several of the development projects. The reason it is separate is to avoid interference with the development solution's unit test VSMDI file. And to allow us to play with different execution methods (other test runners, like Gallio or StoryTeller) without interfering with the development solution. Recently, an interface changed in the development solution, one of our test mocks implemented that interface. But, it was not updated because there was no warning at compile time because it was in another solution. This broke our CI build. Does anyone have a similar setup? How do you handle these issues, do you follow a strict procedure or is there some kind of technical answer?

    Read the article

  • Gluster strange issue with shared mount point behaving like separate mounts

    - by Satish
    I have two nodes, and as an experiment I installed GlusterFS, created a volume and successfully mounted it on each node. But if I create a file on node1 it does not show up on node2; the two behave as if they were separate mounts.
    node1:
        10.101.140.10:/nova-gluster-vol 2.0G 820M 1.2G 41% /mnt
    node2:
        10.101.140.10:/nova-gluster-vol 2.0G 33M 2.0G 2% /mnt
    Volume split-brain info:
        $ sudo gluster volume heal nova-gluster-vol info split-brain
        Gathering Heal info on volume nova-gluster-vol has been successful
        Brick 10.101.140.10:/brick1/sdb
        Number of entries: 0
        Brick 10.101.140.20:/brick1/sdb
        Number of entries: 0
    Test on node1:
        $ echo "TEST" > /mnt/node1
        $ ls -l /mnt/node1
        -rw-r--r-- 1 root root 5 Oct 27 17:47 /mnt/node1
    On node2 (the file isn't there, although they share the mount):
        $ ls -l /mnt/node1
        ls: cannot access /mnt/node1: No such file or directory
    What am I missing?

    Read the article

  • How to deploy an ASP.NET MVC application on shared hosting without losing the beautiful URI?

    - by ZX12R
    I am trying to upload an ASP.NET MVC application to a shared server. I don't have access to IIS. What I have done, after reading from various sources, is change the route registration in the Global.asax file as follows:
        routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
        routes.MapRoute(
            "Default",
            "{controller}.aspx/{action}/{id}",
            new { action = "Index", id = "" }
        );
        routes.MapRoute(
            "Root",
            "",
            new { controller = "Home", action = "Index", id = "" }
        );
    It is working fine, but the problem is that I have lost the beautiful URI. Is there a way to remove the ".aspx" after the controller? Thanks in advance.

    Read the article

  • How to create an Appointment in a Shared Calendar (SharePoint) with VBA (Macro)?

    - by Diogo K.
    I am actually trying to make an appointment from an excel spreadsheet. I have all the information of the appointment, like subject, body, start and end dates, I actually can create an appointment but only with my personal calendar in outlook. How do I copy/move/create an appointment in a shared calendar in a sharepoint server? I've tried: Dim apOL As Object Dim objFolder As Folder Dim cro As String Set apOL = CreateObject("Outlook.Application") Set oItem = apOL.CreateItem(olApItem) Set MAPISession = apOL.Session ... cro = "stssync://sts/?ver=1.0&type=calendar&cmd=add-folder&base-url=(MY SP SERVER)&list-url=%2FLists%2FCronograma%20%20%2Fcalendar%2Easpx&guid=%7B02717CEF%2D404F%2D482F%2DA131%2D5C3C245CD268%7D&site-name=Testes&list-name=Cronograma%20-" ... Set objFolder = MAPISession.OpenSharedFolder(cro, Null,Null,Null) It gives me the error "Type Mismatch" I'd try to get the objFolder as the Sharepoint Folder then later create an local appointment and then try an Item.Move objFolder Is it the correct way?

    Read the article

  • Moving from dedicated to shared cpanel - any scripts to do all / some of the install tasks ?

    - by mbbcat
    Hi, I have a few hundred phpLD sites to move. Each has its own cPanel (and the target may have a shared cPanel), and I can do a full cPanel backup on the original server, but I don't have WHM on the current host. The backups are fairly easy to organize, but so far the installs mean picking through files and setting up databases, mail, etc. by hand. I am thinking there ought to be an easier, i.e. scripted, way to do the installs, or at least some parts of them - can anyone please suggest something? I would like to migrate the stats at the same time. Thanks, M

    Read the article

  • Could/Should I use static classes in asp.net/c# for shared data?

    - by death.au
    Here's the situation I have: I'm building an online system to be used by school groups. Only one school can log into the system at any one time, and from that school you'll get about 13 users. They then proceed into an educational application in which they have to co-operate to complete tasks and, from a code point of view, share variables all over the place. I was thinking that if I set up a static class with static properties that hold the variables that need to be shared, this could save me having to store and access the variables in a database, as long as the static variables are all properly initialized when the application starts and cleaned up at the end. Of course I would also have to put locks on the get and set methods to make the variables thread safe. Something in the back of my mind is telling me this might be a terrible way of going about things, but I'm not sure exactly why, so if people could give me their thoughts for or against using a static class in this situation, I would be quite appreciative.
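
    For illustration, the same idea sketched in Python: one process-wide holder for the shared variables, with a lock around reads and writes so concurrent requests do not interleave updates. (The asker's environment is ASP.NET/C#, where the analogue is a static class guarded by lock statements; bear in mind that such static state lives per application domain and disappears whenever the application restarts or the app pool recycles, so anything that must survive still belongs in a database.)

        import threading

        class SharedState:
            _lock = threading.Lock()
            _values = {}

            @classmethod
            def set(cls, key, value):
                with cls._lock:
                    cls._values[key] = value

            @classmethod
            def get(cls, key, default=None):
                with cls._lock:
                    return cls._values.get(key, default)

        # Any of the ~13 concurrent users' requests can share state this way.
        SharedState.set("current_task", 3)
        print(SharedState.get("current_task"))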

    Read the article

  • How do I setup a shared session between two users on my Ruby on Rails powered site?

    - by ben
    Hey guys, The website that I'm building includes a section where two users can interact. I think I know how to do most of it, except the actual session sharing part. I'm using Ruby on Rails & Javascript (jquery), and I've got user login and session management all working okay. Would the best way to create a shared session be to have a SharedSession model, with accompanying database table, with participant1ID, participant2ID etc? Is there a better way? Thanks so much for reading!

    Read the article

  • Is it OK to run an array with 22k strings in PHP code on a shared web host?

    - by kuchikoo
    I'm new to writing code, so kindly bear with me if this is a very noobish question. A couple of days back I asked a question about PHP code that matches the query entered by users on my website against an array stored within the PHP code and displays the matched queries. Here is the code I'm talking about. Now I've ended up with a rather large list (over 22k) of strings that have to be stored in the array. Is it OK to run it like this? I'm hosting the site on a shared HostGator package - will this cause my site to crash? I don't know too much about DBs, but can I somehow store this in MySQL instead of having it in the code?
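
    To illustrate the "store it in a database instead of in the code" idea, here is a small sketch using Python's built-in sqlite3 module purely for illustration; in PHP the same pattern applies with MySQL through PDO or mysqli and a prepared statement. The 22k terms live in an indexed table, and each user query becomes a single parameterised lookup instead of a 22k-element array shipped inside the source file.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE terms (term TEXT PRIMARY KEY)")

        terms = ["apple", "apricot", "banana"]      # stand-in for the ~22k strings
        conn.executemany("INSERT INTO terms VALUES (?)", [(t,) for t in terms])

        def matches(prefix):
            # parameterised query: user input is never concatenated into the SQL
            cur = conn.execute("SELECT term FROM terms WHERE term LIKE ? ORDER BY term",
                               (prefix + "%",))
            return [row[0] for row in cur]

        print(matches("ap"))   # ['apple', 'apricot']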

    Read the article

  • NHibernate will insert but not update after moving to a shared hosting server running MySQL

    - by Andy LifeBrixx
    Hi, I have a site running MVC and NHibernate (not Fluent), using the standard session-per-request pattern in an HTTP module. It runs fine locally (also with MySQL), but after a move to a hosting provider no UPDATE statements are being issued. I can insert but not update, and no exceptions are raised. I have the 'show_sql' option switched on, which locally shows the UPDATE statements being issued, but on the server no UPDATE statements are logged. I don't think NHProf is an option for me, as I can only run ASP.NET apps on my shared server. Are there any other methods of diagnosing NHibernate issues like this? Has anyone had a similar issue? Cheers, A

    Read the article

  • Quick guide to Oracle IRM 11g: Classification design

    - by Simon Thorpe
    Quick guide to Oracle IRM 11g indexThis is the final article in the quick guide to Oracle IRM. If you've followed everything prior you will now have a fully functional and tested Information Rights Management service. It doesn't matter if you've been following the 10g or 11g guide as this next article is common to both. ContentsWhy this is the most important part... Understanding the classification and standard rights model Identifying business use cases Creating an effective IRM classification modelOne single classification across the entire businessA context for each and every possible granular use caseWhat makes a good context? Deciding on the use of roles in the context Reviewing the features and security for context roles Summary Why this is the most important part...Now the real work begins, installing and getting an IRM system running is as simple as following instructions. However to actually have an IRM technology easily protecting your most sensitive information without interfering with your users existing daily work flows and be able to scale IRM across the entire business, requires thought into how confidential documents are created, used and distributed. This article is going to give you the information you need to ask the business the right questions so that you can deploy your IRM service successfully. The IRM team here at Oracle have over 10 years of experience in helping customers and it is important you understand the following to be successful in securing access to your most confidential information. Whatever you are trying to secure, be it mergers and acquisitions information, engineering intellectual property, health care documentation or financial reports. No matter what type of user is going to access the information, be they employees, contractors or customers, there are common goals you are always trying to achieve.Securing the content at the earliest point possible and do it automatically. Removing the dependency on the user to decide to secure the content reduces the risk of mistakes significantly and therefore results a more secure deployment. K.I.S.S. (Keep It Simple Stupid) Reduce complexity in the rights/classification model. Oracle IRM lets you make changes to access to documents even after they are secured which allows you to start with a simple model and then introduce complexity once you've understood how the technology is going to be used in the business. After an initial learning period you can review your implementation and start to make informed decisions based on user feedback and administration experience. Clearly communicate to the user, when appropriate, any changes to their existing work practice. You must make every effort to make the transition to sealed content as simple as possible. For external users you must help them understand why you are securing the documents and inform them the value of the technology to both your business and them. Before getting into the detail, I must pay homage to Martin White, Vice President of client services in SealedMedia, the company Oracle acquired and who created Oracle IRM. In the SealedMedia years Martin was involved with every single customer and was key to the design of certain aspects of the IRM technology, specifically the context model we will be discussing here. Listening carefully to customers and understanding the flexibility of the IRM technology, Martin taught me all the skills of helping customers build scalable, effective and simple to use IRM deployments. 
No matter how well the engineering department designed the software, badly designed and poorly executed projects can result in difficult to use and manage, and ultimately insecure solutions. The advice and information that follows was born with Martin and he's still delivering IRM consulting with customers and can be found at www.thinkers.co.uk. It is from Martin and others that Oracle not only has the most advanced, scalable and usable document security solution on the market, but Oracle and their partners have the most experience in delivering successful document security solutions. Understanding the classification and standard rights model The goal of any successful IRM deployment is to balance the increase in security the technology brings without over complicating the way people use secured content and avoid a significant increase in administration and maintenance. With Oracle it is possible to automate the protection of content, deploy the desktop software transparently and use authentication methods such that users can open newly secured content initially unaware the document is any different to an insecure one. That is until of course they attempt to do something for which they don't have any rights, such as copy and paste to an insecure application or try and print. Central to achieving this objective is creating a classification model that is simple to understand and use but also provides the right level of complexity to meet the business needs. In Oracle IRM the term used for each classification is a "context". A context defines the relationship between.A group of related documents The people that use the documents The roles that these people perform The rights that these people need to perform their role The context is the key to the success of Oracle IRM. It provides the separation of the role and rights of a user from the content itself. Documents are sealed to contexts but none of the rights, user or group information is stored within the content itself. Sealing only places information about the location of the IRM server that sealed it, the context applied to the document and a few other pieces of metadata that pertain only to the document. This important separation of rights from content means that millions of documents can be secured against a single classification and a user needs only one right assigned to be able to access all documents. If you have followed all the previous articles in this guide, you will be ready to start defining contexts to which your sensitive information will be protected. But before you even start with IRM, you need to understand how your own business uses and creates sensitive documents and emails. Identifying business use cases Oracle is able to support multiple classification systems, but usually there is one single initial need for the technology which drives a deployment. This need might be to protect sensitive mergers and acquisitions information, engineering intellectual property, financial documents. For this and every subsequent use case you must understand how users create and work with documents, to who they are distributed and how the recipients should interact with them. 
A successful IRM deployment should start with one well identified use case (we go through some examples towards the end of this article) and then after letting this use case play out in the business, you learn how your users work with content, how well your communication to the business worked and if the classification system you deployed delivered the right balance. It is at this point you can start rolling the technology out further. Creating an effective IRM classification model Once you have selected the initial use case you will address with IRM, you need to design a classification model that defines the access to secured documents within the use case. In Oracle IRM there is an inbuilt classification system called the "context" model. In Oracle IRM 11g it is possible to extend the server to support any rights classification model, but the majority of users who are not using an application integration (such as Oracle IRM within Oracle Beehive) are likely to be starting out with the built in context model. Before looking at creating a classification system with IRM, it is worth reviewing some recognized standards and methods for creating and implementing security policy. A very useful set of documents are the ISO 17799 guidelines and the SANS security policy templates. First task is to create a context against which documents are to be secured. A context consists of a group of related documents (all top secret engineering research), a list of roles (contributors and readers) which define how users can access documents and a list of users (research engineers) who have been given a role allowing them to interact with sealed content. Before even creating the first context it is wise to decide on a philosophy which will dictate the level of granularity, the question is, where do you start? At a department level? By project? By technology? First consider the two ends of the spectrum... One single classification across the entire business Imagine that instead of having separate contexts, one for engineering intellectual property, one for your financial data, one for human resources personally identifiable information, you create one context for all documents across the entire business. Whilst you may have immediate objections, there are some significant benefits in thinking about considering this. Document security classification decisions are simple. You only have one context to chose from! User provisioning is simple, just make sure everyone has a role in the only context in the business. Administration is very low, if you assign rights to groups from the business user repository you probably never have to touch IRM administration again. There are however some obvious downsides to this model.All users in have access to all IRM secured content. So potentially a sales person could access sensitive mergers and acquisition documents, if they can get their hands on a copy that is. You cannot delegate control of different documents to different parts of the business, this may not satisfy your regulatory requirements for the separation and delegation of duties. Changing a users role affects every single document ever secured. Even though it is very unlikely a business would ever use one single context to secure all their sensitive information, thinking about this scenario raises one very important point. Just having one single context and securing all confidential documents to it, whilst incurring some of the problems detailed above, has one huge value. 
Once secured, IRM protected content can ONLY be accessed by authorized users. Just think of all the sensitive documents in your business today, imagine if you could ensure that only everyone you trust could open them. Even if an employee lost a laptop or someone accidentally sent an email to the wrong recipient, only the right people could open that file. A context for each and every possible granular use case Now let's think about the total opposite of a single context design. What if you created a context for each and every single defined business need and created multiple contexts within this for each level of granularity? Let's take a use case where we need to protect engineering intellectual property. Imagine we have 6 different engineering groups, and in each we have a research department, a design department and manufacturing. The company information security policy defines 3 levels of information sensitivity... restricted, confidential and top secret. Then let's say that each group and department needs to define access to information from both internal and external users. Finally add into the mix that they want to review the rights model for each context every financial quarter. This would result in a huge amount of contexts. For example, lets just look at the resulting contexts for one engineering group. Q1FY2010 Restricted Internal - Engineering Group 1 - Research Q1FY2010 Restricted Internal - Engineering Group 1 - Design Q1FY2010 Restricted Internal - Engineering Group 1 - Manufacturing Q1FY2010 Restricted External- Engineering Group 1 - Research Q1FY2010 Restricted External - Engineering Group 1 - Design Q1FY2010 Restricted External - Engineering Group 1 - Manufacturing Q1FY2010 Confidential Internal - Engineering Group 1 - Research Q1FY2010 Confidential Internal - Engineering Group 1 - Design Q1FY2010 Confidential Internal - Engineering Group 1 - Manufacturing Q1FY2010 Confidential External - Engineering Group 1 - Research Q1FY2010 Confidential External - Engineering Group 1 - Design Q1FY2010 Confidential External - Engineering Group 1 - Manufacturing Q1FY2010 Top Secret Internal - Engineering Group 1 - Research Q1FY2010 Top Secret Internal - Engineering Group 1 - Design Q1FY2010 Top Secret Internal - Engineering Group 1 - Manufacturing Q1FY2010 Top Secret External - Engineering Group 1 - Research Q1FY2010 Top Secret External - Engineering Group 1 - Design Q1FY2010 Top Secret External - Engineering Group 1 - Manufacturing Now multiply the above by 6 for each engineering group, 18 contexts. You are then creating/reviewing another 18 every 3 months. After a year you've got 72 contexts. What would be the advantages of such a complex classification model? You can satisfy very granular rights requirements, for example only an authorized engineering group 1 researcher can create a top secret report for access internally, and his role will be reviewed on a very frequent basis. Your business may have very complex rights requirements and mapping this directly to IRM may be an obvious exercise. The disadvantages of such a classification model are significant...Huge administrative overhead. Someone in the business must manage, review and administrate each of these contexts. If the engineering group had a single administrator, they would have 72 classifications to reside over each year. From an end users perspective life will be very confusing. Imagine if a user has rights in just 6 of these contexts. 
They may be able to print content from one but not another, be able to edit content in 2 contexts but not the other 4. Such confusion at the end user level causes frustration and resistance to the use of the technology. Increased synchronization complexity. Imagine a user who after 3 years in the company ends up with over 300 rights in many different contexts across the business. This would result in long synchronization times as the client software updates all your offline rights. Hard to understand who can do what with what. Imagine being the VP of engineering and as part of an internal security audit you are asked the question, "What rights to researchers have to our top secret information?". In this complex model the answer is not simple, it would depend on many roles in many contexts. Of course this example is extreme, but it highlights that trying to build many barriers in your business can result in a nightmare of administration and confusion amongst users. In the real world what we need is a balance of the two. We need to seek an optimum number of contexts. Too many contexts are unmanageable and too few contexts does not give fine enough granularity. What makes a good context? Good context design derives mainly from how well you understand your business requirements to secure access to confidential information. Some customers I have worked with can tell me exactly the documents they wish to secure and know exactly who should be opening them. However there are some customers who know only of the government regulation that requires them to control access to certain types of information, they don't actually know where the documents are, how they are created or understand exactly who should have access. Therefore you need to know how to ask the business the right questions that lead to information which help you define a context. First ask these questions about a set of documentsWhat is the topic? Who are legitimate contributors on this topic? Who are the authorized readership? If the answer to any one of these is significantly different, then it probably merits a separate context. Remember that sealed documents are inherently secure and as such they cannot leak to your competitors, therefore it is better sealed to a broad context than not sealed at all. Simplicity is key here. Always revert to the first extreme example of a single classification, then work towards essential complexity. If there is any doubt, always prefer fewer contexts. Remember, Oracle IRM allows you to change your mind later on. You can implement a design now and continue to change and refine as you learn how the technology is used. It is easy to go from a simple model to a more complex one, it is much harder to take a complex model that is already embedded in the work practice of users and try to simplify it. It is also wise to take a single use case and address this first with the business. Don't try and tackle many different problems from the outset. Do one, learn from the process, refine it and then take what you have learned into the next use case, refine and continue. Once you have a good grasp of the technology and understand how your business will use it, you can then start rolling out the technology wider across the business. Deciding on the use of roles in the context Once you have decided on that first initial use case and a context to create let's look at the details you need to decide upon. 
For each context, identify; Administrative rolesBusiness owner, the person who makes decisions about who may or may not see content in this context. This is often the person who wanted to use IRM and drove the business purchase. They are the usually the person with the most at risk when sensitive information is lost. Point of contact, the person who will handle requests for access to content. Sometimes the same as the business owner, sometimes a trusted secretary or administrator. Context administrator, the person who will enact the decisions of the Business Owner. Sometimes the point of contact, sometimes a trusted IT person. Document related rolesContributors, the people who create and edit documents in this context. Reviewers, the people who are involved in reviewing documents but are not trusted to secure information to this classification. This role is not always necessary. (See later discussion on Published-work and Work-in-Progress) Readers, the people who read documents from this context. Some people may have several of the roles above, which is fine. What you are trying to do is understand and define how the business interacts with your sensitive information. These roles obviously map directly to roles available in Oracle IRM. Reviewing the features and security for context roles At this point we have decided on a classification of information, understand what roles people in the business will play when administrating this classification and how they will interact with content. The final piece of the puzzle in getting the information for our first context is to look at the permissions people will have to sealed documents. First think why are you protecting the documents in the first place? It is to prevent the loss of leaking of information to the wrong people. To control the information, making sure that people only access the latest versions of documents. You are not using Oracle IRM to prevent unauthorized people from doing legitimate work. This is an important point, with IRM you can erect many barriers to prevent access to content yet too many restrictions and authorized users will often find ways to circumvent using the technology and end up distributing unprotected originals. Because IRM is a security technology, it is easy to get carried away restricting different groups. However I would highly recommend starting with a simple solution with few restrictions. Ensure that everyone who reasonably needs to read documents can do so from the outset. Remember that with Oracle IRM you can change rights to content whenever you wish and tighten security. Always return to the fact that the greatest value IRM brings is that ONLY authorized users can access secured content, remember that simple "one context for the entire business" model. At the start of the deployment you really need to aim for user acceptance and therefore a simple model is more likely to succeed. As time passes and users understand how IRM works you can start to introduce more restrictions and complexity. Another key aspect to focus on is handling exceptions. If you decide on a context model where engineering can only access engineering information, and sales can only access sales data. Act quickly when a sales manager needs legitimate access to a set of engineering documents. Having a quick and effective process for permitting other people with legitimate needs to obtain appropriate access will be rewarded with acceptance from the user community. 
These use cases can often be satisfied by integrating IRM with a good Identity & Access Management technology which simplifies the process of assigning users the correct business roles. The big print issue... Printing is often an issue of contention, users love to print but the business wants to ensure sensitive information remains in the controlled digital world. There are many cases of physical document loss causing a business pain, it is often overlooked that IRM can help with this issue by limiting the ability to generate physical copies of digital content. However it can be hard to maintain a balance between security and usability when it comes to printing. Consider the following points when deciding about whether to give print rights. Oracle IRM sealed documents can contain watermarks that expose information about the user, time and location of access and the classification of the document. This information would reside in the printed copy making it easier to trace who printed it. Printed documents are slower to distribute in comparison to their digital counterparts, so time sensitive information in printed format may present a lower risk. Print activity is audited, therefore you can monitor and react to users abusing print rights. Summary In summary it is important to think carefully about the way you create your context model. As you ask the business these questions you may get a variety of different requirements. There may be special projects that require a context just for sensitive information created during the lifetime of the project. There may be a department that requires all information in the group is secured and you might have a few senior executives who wish to use IRM to exchange a small number of highly sensitive documents with a very small number of people. Oracle IRM, with its very flexible context classification system, can support all of these use cases. The trick is to introducing the complexity to deliver them at the right level. In another article i'm working on I will go through some examples of how Oracle IRM might map to existing business use cases. But for now, this article covers all the important questions you need to get your IRM service deployed and successfully protecting your most sensitive information.

    Read the article

  • eAccelerator settings for PHP/Centos/Apache

    - by bobbyh
    I have eAccelerator installed on a server running Wordpress using PHP/Apache on CentOS. I am occassionally getting persistent "white pages", which presumably are PHP Fatal Errors (although these errors don't appear in my error_log). These "white pages" are sprinkled here and there throughout the site. They persist until I go to my eAccelerator control.php page and clear/clean/purge my caches, which suggests to me that I've configured eAccelerator improperly. Here are my current /etc/php.ini settings: memory_limit = 128M; eaccelerator.shm_size="64", where shm.size is "the amount of shared memory eAccelerator should allocate to cache PHP scripts" (see http://eaccelerator.net/wiki/Settings) eaccelerator.shm_max="0", where shm_max is "the maximum size a user can put in shared memory with functions like eaccelerator_put ... The default value is "0" which disables the limit" eaccelerator.shm_ttl="0" - "When eAccelerator doesn't have enough free shared memory to cache a new script it will remove all scripts from shared memory cache that haven't been accessed in at least shm_ttl seconds. By default this value is set to "0" which means that eAccelerator won't try to remove any old scripts from shared memory." eaccelerator.shm_prune_period="0" - "When eAccelerator doesn't have enough free shared memory to cache a script it tries to remove old scripts if the previous try was made more then "shm_prune_period" seconds ago. Default value is "0" which means that eAccelerator won't try to remove any old script from shared memory." eaccelerator.keys = "shm_only" - "These settings control the places eAccelerator may cache user content. ... 'shm_only' cache[s] data in shared memory" On my phpinfo page, it says: memory_limit 128M Version 0.9.5.3 and Caching Enabled true On my eAccelerator control.php page, it says 64 MB of total RAM available Memory usage 77.70% (49.73MB/ 64.00MB) 27.6 MB is used by cached scripts in the PHP opcode cache (I added up the file sizes myself) 22.1 MB is used by the cache keys, which is populated by the Wordpress object cache. My questions are: Is it true that there is only 36.4 MB of room in the eAccelerator cache for total "cache keys" (64 MB of total RAM minus whatever is taken by cached scripts, which is 27.6 MB at the moment)? What happens if my app tries to write more than 22.1 MB of cache keys to the eAccelerator memory cache? Does this cause eAccelerator to go crazy, like I've seen? If I change eaccelerator.shm_max to be equal to (say) 32 MB, would that avoid this problem? Do I also need to change shm_ttl and shm_prune_period to make eAccelerator respect the MB limit set by shm_max? Thanks! :-)
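
    As a sanity check on the first question, the figures quoted above do add up the way suspected; a quick back-of-envelope in Python:

        # All sizes in MB, taken from the numbers quoted in the question.
        total_shm      = 64.0    # eaccelerator.shm_size
        cached_scripts = 27.6    # opcode cache, summed from the control page
        cache_keys     = 22.1    # user cache written by the Wordpress object cache

        used = cached_scripts + cache_keys
        print(f"used: {used:.1f} MB ({used / total_shm:.1%})")   # ~49.7 MB, ~77.7%, matching the control page
        print(f"room left for keys if the opcode cache stays put: {total_shm - cached_scripts:.1f} MB")
        print(f"free right now: {total_shm - used:.1f} MB")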

    Read the article

  • command history across multiple PuTTy sessions in SunOS 5.10

    - by foampile
    I have multiple PuTTy sessions open to my SunOS 5.10 server, and I am using ksh, and SOMETIMES the command history is shared among the different sessions and SOMETIMES it is not. I cannot figure out what determines whether it is or is not shared. By shared what I mean is that a command run in one session will be seen as previous command run in another session. I prefer it not to be shared, is there a config setting for that? Thanks

    Read the article

  • Syntax error on running a batch file to replace files

    - by Ralph
    I have a batch file intended to replace all instances of tracking.js within a folder/sub folders. FOR /R "D:\Virtual Servers (Testing)\CourseWare Master\Shared\Jenison\Version1.2\" %%I IN (tracking.js*) DO COPY /Y "D:\Virtual Servers (Testing)\CourseWare Master\Shared\Jenison\tracking.js" %%~fI When this is run I get the following syntax error C:COPY /Y "D:\Virtual Servers (Testing)\CourseWare Master\Shared\Jenison\track ing.js" D:\Virtual Servers (Testing)\CourseWare Master\Shared\Jenison\Version1.2 \SHAPERS_COMBINED\Smarter Communications\WhatisInfluencing\script\Tracking.js The syntax of the command is incorrect. Ideas please?

    Read the article

  • ack misses results (vs. grep)

    - by techpeace
    I'm sure I'm misunderstanding something about ack's file/directory ignore defaults, but perhaps somebody could shed some light on this for me: mbuck$ grep logout -R app/views/ Binary file app/views/shared/._header.html.erb.bak.swp matches Binary file app/views/shared/._header.html.erb.swp matches app/views/shared/_header.html.erb.bak: <%= link_to logout_text, logout_path, { :title => logout_text, :class => 'login-menuitem' } %> mbuck$ ack logout app/views/ mbuck$ Whereas... mbuck$ ack -u logout app/views/ Binary file app/views/shared/._header.html.erb.bak.swp matches Binary file app/views/shared/._header.html.erb.swp matches app/views/shared/_header.html.erb.bak 98:<%= link_to logout_text, logout_path, { :title => logout_text, :class => 'login-menuitem' } %> Simply calling ack without options can't find the result within a .bak file, but calling with the --unrestricted option can find the result. As far as I can tell, though, ack does not ignore .bak files by default.

    Read the article
