Search Results

Search found 34056 results on 1363 pages for 'mod access'.


  • Creating a Corporate Data Hub

    - by BuckWoody
    The Windows Azure Marketplace has a rich assortment of data and software offerings for you to use – a type of Software as a Service (SaaS) for IT workers, not necessarily for end-users. Among those offerings is the “Data Hub” – a codename for a project that ironically actually does what the codename says.

    In many of our organizations, we have multiple data quality issues. Finding data is one problem, but finding it just once is often a bigger problem. Lots of departments and even individuals have stored the same data more than once, and in some cases, made changes to one of the copies. It’s difficult to know which location or version of the data is authoritative.

    Then there’s the problem of accessing the data. It’s fairly straightforward to publish a database, share or other location internally to store the data. But then you have to figure out who owns it, how it is controlled, and pass out the various connection strings to those who want to use it. And then you need to figure out how to let folks access the internal data externally – bringing up all kinds of security issues. Finally, in many cases our user community wants us to combine data from the internal sources with external data, bringing the security, connection-string, and exploration issues up all over again.

    Enter the Data Hub. This is an online offering, where you assign an administrator and data stewards. You import the data into the service, and it’s available to you - and only you and your organization if you wish. The basic steps for this service are to set up the portal for your company, assign administrators and permissions, and then assign data areas and import data into them. From there you make them discoverable, and then you have multiple options by which you or your users can access that data. You’re then able, if you wish, to combine that data with other data in one location.

    So how does all that work? What about security? Is it really that easy? And can you really move the data definition off to the Subject Matter Experts (SMEs) that know the particular data stack better than the IT team does? Well, nothing good is easy – but using the Data Hub is actually pretty simple. I’ll give you a link in a moment where you can sign up and try this yourself. Once you sign up, you assign an administrator. From there you’ll create data areas, and then use a simple interface to bring the data in. All of this is done in a portal interface – nothing to install, configure, update or manage.

    After the data is entered in, and you’ve assigned metadata to describe it, your users have multiple options to access it. They can simply use the portal – which actually has powerful visualizations you can use on any platform, even mobile phones or tablets. Your users can also hit the data with Excel – which gives them ultimate flexibility for display, all while using an authoritative, single reference for the data. Since the service is online, they can do this wherever they are – given the proper authentication and permissions.

    You can also hit the service with simple API calls, like this one from C#: http://msdn.microsoft.com/en-us/library/hh921924. You can make HTTP calls instead of code, and the data can even be exposed as an OData feed. As you can see, there are a lot of options.

    You can check out the offering here: http://www.microsoft.com/en-us/sqlazurelabs/labs/data-hub.aspx and you can read the documentation here: http://msdn.microsoft.com/en-us/library/hh921938
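    For illustration only, here’s a minimal sketch of what a raw HTTP call to a published data set might look like from .NET – the endpoint URL, data set name and credentials below are placeholders, not a real Data Hub address:

        // Hypothetical sketch: pulling a published data set as a raw OData feed.
        // The URL and credentials are placeholders for illustration only.
        using System;
        using System.Net;

        class DataHubSample
        {
            static void Main()
            {
                using (var client = new WebClient())
                {
                    // The service requires authentication; a simple account/key
                    // credential is assumed here.
                    client.Credentials = new NetworkCredential("accountName", "accountKey");

                    string feed = client.DownloadString(
                        "https://yourcompany.clouddatahub.net/Data.svc/YourDataSet");

                    Console.WriteLine(feed); // raw Atom/OData XML
                }
            }
        }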

    Read the article

  • Apache config that uses two document roots based on whether the requested resource exists in the first [closed]

    - by mattalexx
    Background

    I have a client site that consists of a CakePHP installation and a Magento installation:

        /web/example.com/
        /web/example.com/app/                    <== CakePHP
        /web/example.com/app/webroot/            <== DocumentRoot
        /web/example.com/app/webroot/store/      <== Magento
        /web/example.com/config/                 <== Site-wide config
        /web/example.com/vendors/                <== Site-wide libraries

    The server runs Apache 2.2.3.

    The problem

    The whole company has FTP access and got used to clogging up the /web/example.com/, /web/example.com/app/webroot/, and /web/example.com/app/webroot/store/ directories with their own files. Sometimes these files need HTTP access and sometimes they don't. In any case, this mess makes my job harder when it comes to maintaining the site. Code merges, tarring the live code, etc, is very complicated and usually requires a bunch of filters.

    Abandoned solution

    At first, I thought I would set up a new subdomain on the same server, move all of their files there, and change their FTP chroot. But that wouldn't work for these reasons:

    Firstly, I have no idea (and neither do they remember) what marketing materials they've sent out that contain URLs to certain resources they've uploaded to the server, using the main domain, and also using abstract subdomains that use the main virtual host because it has ServerAlias *.example.com. So suddenly having them only use static.example.com isn't feasible.

    Secondly, the PHP scripts in their projects are potentially very non-portable. I want their files to stay in as similar an environment as they were built as I can. Also, I do not want to debug their code to make it portable.

    Half-baked solution

    After some thought, I decided to find a way to section off the actual website files into another directory that they would not touch. The company's uploaded files would stay where they were. This would ensure that I didn't break any of their projects that needed HTTP access. It would look something like this:

        /web/example.com/                          <== A bunch of their files are in here
        /web/example.com/app/webroot/              <== 1st DocumentRoot; a bunch of their files are in here
        /web/example.com/app/webroot/store/        <== Some more are in here
        /web/example.com/site/                     <== New dir; contains only site files
        /web/example.com/site/app/                 <== CakePHP
        /web/example.com/site/app/webroot/         <== 2nd DocumentRoot
        /web/example.com/site/app/webroot/store/   <== Magento
        /web/example.com/site/config/              <== Site-wide config
        /web/example.com/site/vendors/             <== Site-wide libraries

    After I made this change, I would not need to pay attention to anything except for the stuff within /web/example.com/site/ and my job would be a lot easier. I would be the only one changing stuff in there.

    So here's where the Apache magic would happen: I need an HTTP request to http://www.example.com/ to first use /web/example.com/app/webroot/ as the document root. If nothing is found (no miscellaneous uploaded company projects are found), try finding something within /web/example.com/site/app/webroot/.

    Another thing to keep in mind is, the site might have some problems if the $_SERVER['DOCUMENT_ROOT'] variable reads /web/example.com/app/webroot/ but the actual files are within /web/example.com/site/app/webroot/. It would be better if the DOCUMENT_ROOT environment variable could be /web/example.com/site/app/webroot/ for anything within the /web/example.com/site/app/webroot/ directory.

    Conclusion

    Is my half-baked solution possible with Apache 2.2.3? Is there a better way to solve this problem?
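    For concreteness, the sort of thing I imagine in the vhost is the usual mod_rewrite fallback pattern (untested sketch; requires mod_rewrite, and note it does nothing about the DOCUMENT_ROOT variable problem described above):

        # Untested sketch: serve from the old webroot if the file exists there,
        # otherwise fall through to the new /site/ webroot. Paths as above.
        DocumentRoot /web/example.com/app/webroot

        RewriteEngine On
        # If the request doesn't resolve to a real file...
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-f
        # ...or a real directory under the first webroot...
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-d
        # ...rewrite it to the second webroot on the filesystem.
        RewriteRule ^/?(.*)$ /web/example.com/site/app/webroot/$1 [L]

    Whether the filesystem-path substitution behaves this way in every 2.2.x configuration is exactly the kind of thing I'd like confirmed.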

    Read the article

  • The World of SQL Database Deployment

    - by GGBlogger
    In my early development days, I used Microsoft Access for building databases. It made things easy since I only needed to package the database with the installation package so my clients would have access to it. When we began the development of a new package in Visual Studio .NET I decided to use SQL Server Express. It was free and provided good tools - also free. I thought it was a tremendous idea until it came time to distribute our new software! What a surprise.

    The nightmare

    Ah, the choices! Detach the database and have the client reattach it to a newly installed – oh wait. FIRST my new client needs to download and install SQL Server Express with SQL Server Management Studio. That’s not a great thing, but it is one more nightmare step for users who may have other versions of SQL installed. Then the question became – do we detach and reattach or do we do a backup? It was too late (bad planning) to revert to Microsoft Access, but we badly needed a simple way to package and distribute both the database AND sample contents.

    Red Gate to the rescue

    It took me a while to find an answer, but I did find it in a package called SQL Packager, sold by a relatively unpublicized company in England called Red Gate. They call their products “ingeniously simple” and I must agree with that description. With SQL Packager you point to the database (more in a minute) you want to distribute. A few mouse clicks and dialogs and you have an executable file that you can ship virtually anywhere and virtually any way which, when run, installs the database on your destination SQL Server instance! It really is that simple.

    Easier to show than tell

    Let’s explore a hypothetical case. Let’s say you have a local SQL database of customers and you have decided you want to share it with your subsidiaries or partners. Here is the underlying screen you will see on starting SQL Packager. There are a bunch of possibilities here, but I’m going to keep this relatively simple. At this point I simply want to illustrate the simplicity of generating an executable to deliver your database. You will notice that you can set up a new package, edit an existing package or change a bunch of options.

    Start SQL Packager

    The following is the default dialog you get on startup. In the next dialog, I’ve selected the Server and Database. I’ve also selected Windows Authentication. Pressing Next causes SQL Packager to run a number of checks and produce a report. Now you’re given a comprehensive list of what is going to be packaged and you’re allowed to change it if you desire. I’ve never made any changes here, so I can’t really make any suggestions. That just illustrates the comprehensive nature of so many Red Gate products, including this one. Clicking Next gives you still further options. SQL Packager then works its magic and shows you a dialog with the results. Packager then gives you a dialog of the scripts it has generated (the capture above only shows 1 of 4 tabs). Finally, pressing Next gives you the option to generate a .NET executable or a C# project. I’ve only generated an executable, so I’m not in a position to tell you what the C# project looks like. That may be the subject of further discussions. You can rename the package and tell SQL Packager where to save it.

    I’ve skipped a lot, but this will serve to illustrate the comprehensive (and ingenious) things Red Gate does. All in all, it’s a superb way to distribute populated SQL databases. Oh – we’ll save running the resulting executable for later also, but believe me, it’s insanely simple.

    Read the article

  • Multitenancy in SQL Azure

    - by cibrax
    If you are building a SaaS application in Windows Azure that relies on SQL Azure, you will probably need to support multiple tenants at the database level. This is a short overview of the different approaches you can use to support that scenario.

    A different database per tenant. A new database is created and assigned when a tenant is provisioned.

    Pros:
    - Complete isolation between tenants. All the data for a tenant lives in a database only he can access.

    Cons:
    - It’s not cost effective. SQL Azure databases are not cheap, and the minimum size for a database is 1 GB. You might be paying for storage that you don’t really use.
    - A different connection pool is required per database.
    - Updates must be replicated across all the databases.
    - You need multiple backup strategies across all the databases.

    Multiple schemas in a database shared by all the tenants. A single database is shared among all the tenants, but every tenant is assigned a different schema and database user.

    Pros:
    - You only pay for a single database.
    - Data is isolated at the schema level. If the credentials for one tenant are compromised, the data for the other tenants is not.

    Cons:
    - You need to replicate all the database objects in every schema, so the number of objects can increase indefinitely.
    - Updates must be replicated across all the schemas.
    - The connection pool for the database must maintain a different connection per tenant (or set of credentials).
    - A different user is required per tenant, which is stored at the server level. You have to back up that user independently.

    Centralizing the database access with stored procedures in a database shared by all the tenants. A single database is shared among all the tenants, but nobody can read the data directly from the tables. All the data operations are performed through stored procedures that centralize access to the tenant data. The stored procedures contain some logic to map the database user to a specific tenant.

    Pros:
    - You only pay for a single database.
    - You only have one set of objects to maintain and back up.

    Cons:
    - There is no real isolation. All the data for the different tenants is shared in the same tables.
    - You cannot use a traditional ORM like EF Code First for consuming the data.
    - A different user is required per tenant, which is stored at the server level. You have to back up that user independently.

    SQL Federations. A single database is shared among all the tenants, but a different federation is used per tenant. A federation, in a few words, is a mechanism for horizontal scaling in SQL Azure, which basically uses the idea of logical partitions to distribute data based on certain criteria.

    Pros:
    - You only have a single database with multiple federations.
    - You can use filtering in the connections to pick the right federation, so any ORM could be used to consume the data.

    Cons:
    - There is no real isolation at the database level. The isolation is enforced programmatically with federations.
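    To illustrate the “multiple schemas” point about connection pools – a rough sketch, with all server, database and credential names made up:

        // Rough sketch (all names made up): with one schema and one SQL user per
        // tenant, every tenant ends up with its own connection string - and
        // therefore its own connection pool.
        using System.Data.SqlClient;

        static SqlConnection OpenTenantConnection(string tenantUser, string tenantPassword)
        {
            var builder = new SqlConnectionStringBuilder
            {
                DataSource = "yourserver.database.windows.net", // placeholder server
                InitialCatalog = "SharedTenantDb",              // the single shared database
                UserID = tenantUser,        // a different user per tenant...
                Password = tenantPassword   // ...keys a different pool per tenant
            };

            var connection = new SqlConnection(builder.ConnectionString);
            connection.Open(); // unqualified names resolve against the tenant's default schema
            return connection;
        }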

    Read the article

  • How do I get public feed from facebook without user authentication on a native/Desktop app?

    - by KronoS
    I'm looking to get publicly available facebook feeds (i.e. Google's facebook page/posts). However, instead of forcing the user to sign into their own facebook app, I want to be able to access these posts. I've looked into using "App Access Tokens", however since my application is a native/Desktop app (iOS, Android, WP8/Win 8) I'm not able to do this. Is there a way to get publicly accessible feeds from facebook without user authentication?

    I'm using the Facebook C# SDK to access facebook. Currently I'm doing the following:

        dynamic tokenInfo = fb.Get(String.Format(
            "/oauth/access_token?client_id={0}&client_secret={1}&grant_type=client_credentials",
            FbController.AppId, FbController.AppSecret));
        var appAccessToken = (string)tokenInfo.access_token;

        fb = new FacebookClient();
        dynamic response = fb.Get(String.Format(
            "/google/posts?access_token={0}", appAccessToken));

    The problem is that this only works if my application is set to "web" instead of "native/Desktop". I get the following error when running this code with the app classified as native/Desktop:

        (OAuthException - #15) (#15) Requires session when calling from a desktop app

    Read the article

  • AWS Amazon EC2 - password-less SSH login for non-root users using PEM keypairs

    - by Mark White
    We've got a couple of clusters running on AWS (HAProxy/Solr, PGPool/PostgreSQL) and we've set up scripts to allow new slave instances to be auto-included into the clusters by updating their IPs in config files held on S3, then SSHing to the master instance to kick it to download the revised config and restart the service. It's all working nicely, but in testing we're using our master pem for SSH, which means it needs to be stored on an instance. Not good.

    I want a non-root user that can use an AWS keypair and will have sudo access to run the download-config-and-restart scripts, but nothing else. rbash seems to be the way to go, but I understand this can be insecure unless set up correctly. So what security holes are there in this approach:

    - New AWS keypair created for user.pem (not really called 'user')
    - New user on instances: user
    - Public key for user is in ~user/.ssh/authorized_keys (taken by creating a new instance with user.pem, and copying it from /root/.ssh/authorized_keys)
    - Private key for user is in ~user/.ssh/user.pem
    - 'user' has login shell of /home/user/bin/rbash
    - ~user/bin/ contains symbolic links to /bin/rbash and /usr/bin/sudo
    - /etc/sudoers has entry "user ALL=(root) NOPASSWD:
    - ~user/.bashrc sets PATH to /home/user/bin/ only
    - ~user/.inputrc has 'set disable-completion on' to prevent double tabbing from 'sudo /' to find paths.
    - ~user/ -R is owned by root with read-only access for user, except for ~user/.ssh which has write access for user (for writing known_hosts), and ~user/bin/* which are +x
    - Inter-instance communication uses 'ssh -o StrictHostKeyChecking=no -i ~user/.ssh/user.pem user@ sudo '

    Any thoughts would be welcome. Mark...

    Read the article

  • Using Ghostscript in a Webapplication (PDF Thumbnails)

    - by cpt.oneeye
    Hello, I am using the GhostscriptSharp wrapper for C# and Ghostscript. I want to generate thumbnails from PDF files. Further information on the sample code is given here. There are different methods imported from the Ghostscript C DLL "gsdll32.dll":

        [DllImport("gsdll32.dll", EntryPoint = "gsapi_new_instance")]
        private static extern int CreateAPIInstance(out IntPtr pinstance, IntPtr caller_handle);

        [DllImport("gsdll32.dll", EntryPoint = "gsapi_init_with_args")]
        private static extern int InitAPI(IntPtr instance, int argc, IntPtr argv);

        //...and so on

    I am using the GhostscriptWrapper class for generating the thumbnails in a web application (.NET 2.0). This class uses the methods imported above:

        protected void Page_Load(object sender, EventArgs e) {
            GhostscriptWrapper.GeneratePageThumb("c:\\sample.pdf", "c:\\sample.jpg", 1, 100, 100);
        }

    When I debug the web application in Visual Studio 2008 by hitting F5, it works fine (a new instance of the webserver is generated). When I create a Windows Forms application, it works too; the thumbnails get generated. But when I access the application with the webbrowser directly (http://localhost/mywebapplication/..) it doesn't work. No thumbnails are generated, but there is also no exception thrown.

    I placed gsdll32.dll in the system32 folder of Windows XP. The Ghostscript runtime is installed too, and I have given full access in the IIS web project (.NET 2.0). Does anybody know why I can't access Ghostscript from my web application? Are there any security issues for accessing DLL files on the IIS server? Greetings, Klaus

    Read the article

  • Silverlight 4 - MVC 2 ASP.NET Membership integration "single sign on"

    - by Scrappydog
    Scenario: I have an ASP.NET MVC 2 site using ASP.NET Forms Authentication. The site includes a Silverlight 4 application that needs to securely call internal web services. The web services also need to be publicly exposed for third-party authenticated access.

    Challenges:
    - Securely accessing web services from Silverlight using the current user's identity, without requiring the user to re-login in the Silverlight application.
    - Providing a secure way for third-party applications to access the same web services with the same users' credentials, ideally without using ASP.NET Forms Authentication.

    Additional details and limitations:
    - This application is hosted in Azure.
    - We would rather NOT use RIA Services if at all possible.

    Solutions under consideration: I think that if the web services are part of the same MVC site that hosts the Silverlight application, then forms authentication should probably "just work" from Silverlight based on the user's forms auth cookies. But this seems to rule out the possibility of hosting the web services separately (which is desirable in our scenario). For third-party access to the web services, I'm guessing that separate endpoints with a different authentication solution are probably the right answer, but I would rather only support one version of the services if possible...

    Questions:
    - Can anybody point me towards any sample applications that implement something like this?
    - How would you recommend implementing this solution?

    Read the article

  • class weblogic.management.WeblogicMBean not found

    - by khue
    Hi all, I run into this problem when I try to run JUnit test cases in fork mode (starting each test in a separate JVM) using an Ant build file:

        [junit] Exception in thread "main" java.lang.NoClassDefFoundError: weblogic/management/WebLogicMBean
        [junit]     at java.lang.ClassLoader.defineClass1(Native Method)
        [junit]     at java.lang.ClassLoader.defineClass(ClassLoader.java:621)
        [junit]     at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:124)
        [junit]     at java.net.URLClassLoader.defineClass(URLClassLoader.java:260)
        [junit]     at java.net.URLClassLoader.access$000(URLClassLoader.java:56)
        [junit]     at java.net.URLClassLoader$1.run(URLClassLoader.java:195)
        [junit]     at java.security.AccessController.doPrivileged(Native Method)
        [junit]     at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
        [junit]     at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
        [junit]     at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
        [junit]     at java.lang.ClassLoader.loadClass(ClassLoader.java:252)
        [junit]     at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:320)
        [junit]     at java.lang.ClassLoader.defineClass1(Native Method)
        [junit]     at java.lang.ClassLoader.defineClass(ClassLoader.java:621)
        [junit]     at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:124)
        [junit]     at java.net.URLClassLoader.defineClass(URLClassLoader.java:260)
        [junit]     at java.net.URLClassLoader.access$000(URLClassLoader.java:56)
        [junit]     at java.net.URLClassLoader$1.run(URLClassLoader.java:195)
        [junit]     at java.security.AccessController.doPrivileged(Native Method)
        [junit]     at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
        [junit]     at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
        [junit]     at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
        [junit]     at java.lang.ClassLoader.loadClass(ClassLoader.java:252)
        [junit]     at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:320)
        [junit]     at java.lang.ClassLoader.defineClass1(Native Method)
        [junit]     at java.lang.ClassLoader.defineClass(ClassLoader.java:621)
        [junit]     at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:124)
        [junit]     at java.net.URLClassLoader.defineClass(URLClassLoader.java:260)
        [junit]     at java.net.URLClassLoader.access$000(URLClassLoader.java:56)
        [junit]     at java.net.URLClassLoader$1.run(URLClassLoader.java:195)
        ....

    I have the library weblogic.jar in my build library folder, which is set as the classpath for the junit task. I looked inside this file and can't find WebLogicMBean.class. However, in JDev, I can import weblogic.management.WebLogicMBean into my class if I set a library reference to this weblogic.jar file, and the class compiles without problem. Any suggestion of what is really going wrong? Thanks a lot.

    Read the article

  • How to share session cookies between Internet Explorer and an ActiveX components hosted in a webpage

    - by jerem
    I am currently working on a .Net application which makes HTTP requests to some web applications hosted on an IIS server. The application is deployed through ClickOnce and is working fine on simple network architectures.

    One of our customers has a very complex network involving a custom authentication server on which the user first has to log in, in order to be authenticated and get access to other applications on this network. Once authenticated on this server, a session cookie is created and sent to the user. Every time the user then makes a request to a secured server of the network, this cookie is checked to grant access to the user. If this cookie is not sent with the request, the user is redirected to the login page. The only browser used is Internet Explorer.

    This cookie cannot be accessed from our .Net application, since it is executed in another process than the Internet Explorer process which was used to log the user in, and thus it is not sent with our requests, which cannot be completed since the server redirects every one of our requests to the login page.

    I had a look at embedding my application into Internet Explorer by making the main control COM-visible and creating it on an HTML page with an <object> tag. It is working properly; however, the session cookies set earlier in the browser are not sent when the ActiveX control makes web requests. I was hoping this sharing of the session information would be automatic (although I didn't really believe it). So my questions are:

    - Is it possible to have access to this cookie in the embedded ActiveX? How?
    - Does it make a difference to use a .Net COM-interop component instead of a "true" ActiveX control?
    - Also, are there specific security terms to describe this kind of behavior (given that I am not an expert at all on security topics, this lack of proper terminology makes it a lot harder to find the needed resources)?

    My goal is to have my application's requests look the same as the requests made by the host browser, and I thought that embedding the application as an ActiveX control in the browser was the only way to achieve this; however, any suggestion on another way to do this is welcome.

    Read the article

  • Help catching AV with WinDbg and ADPlus 7.0

    - by Stoune
    I want to catch a memory access violation in SQL Server Compact Edition, as described at http://debuggingblog.com/wp/2009/02/18/memory-access-violation-in-sql-server-compact-editionce/. The suggested config is: CRASH Quiet MyApp.exe NoDumpOnFirstChance clr;av FullDump gn

    I downloaded the latest Debugging Tools and noticed that Microsoft has rewritten the adplus tool in managed code and changed the syntax of the config file. I rewrote the config file like this:

        <ADPlus Version="2">
          <Settings>
            <RunMode>Crash</RunMode>
            <Option>Quiet</Option>
            <Option>NoDumpOnFirst</Option>
            <Sympath>c:\symbols\</Sympath>
            <OutputDir>c:\work\output\</OutputDir>
            <ProcessName>c:\work\app\output\MyApp.exe</ProcessName>
          </Settings>
          <Exceptions><!--to get the full dump on clr access violation-->
            <Exception Code="clr;av">
              <Actions1>FullDump</Actions1>
              <ReturnAction1>gn</ReturnAction1>
            </Exception>
          </Exceptions>
        </ADPlus>

    Now I get the error "Couldn't find exception with code: clr;av". If I understand correctly, it didn't load the sos extension, but I can't find the right section and syntax to use to load it. adplus_old.vbs, for some reason, didn't launch the process on Windows 7.

        WinDBG 6.12.0002.633 X86
        ADPlus Engine Version: 7.01.002 02/27/2009

    Maybe someone has a working example config for debugging a .NET app with the latest adplus.exe?

    Read the article

  • EJB3 JNDI Lookup Failure in JEE application client

    - by Hank
    I'm trying to access an EJB3 from a JEE client-application, but keep getting nothing but lookup failures. My JEE application 'CoreServer' is exposing a number of beans with remote interfaces. I have no problem accessing them from a web application deployed on the same Glassfish v3.0.1. Now I'm trying to access it from a client-application:

        public class Main {
            public static void main(String[] args) {
                CampaignControllerRemote bean = null;
                try {
                    InitialContext ctx = new InitialContext();
                    bean = (CampaignControllerRemote) ctx.lookup("java:global/CoreServer/CampaignController");
                } catch (Exception e) {
                    System.out.println(e.getMessage());
                }
                if (bean != null) {
                    Campaign campaign = bean.get(361);
                    if (campaign != null) {
                        System.out.println("Got " + campaign);
                    }
                }
            }
        }

    When I deploy it to Glassfish and run it from the appclient, I get this error:

        Lookup failed for 'java:global/CoreServer/CampaignController' in SerialContext targetHost=localhost,targetPort=3700,orb'sInitialHost=localhost,orb'sInitialPort=3700

    However, that's exactly the same JNDI name I use when I look up the bean from the web application (via SessionContext, not InitialContext - does that matter?). Also, when I deploy 'CoreServer', Glassfish reports:

        Portable JNDI names for EJB CampaignController : [java:global/CoreServer/CampaignController!mvs.api.CampaignControllerRemote, java:global/CoreServer/CampaignController]
        Glassfish-specific (Non-portable) JNDI names for EJB CampaignController : [mvs.api.CampaignControllerRemote, mvs.api.CampaignControllerRemote#mvs.api.CampaignControllerRemote]

    I tried all four names, none worked. Is the appclient unable to access beans with (only) Remote interfaces?

    Read the article

  • EF4 querying from parent to grandchildren

    - by Hans Kesting
    I have a model with Parents, Children and Grandchildren, in a many-to-many relationship. Using this article I created POCO classes that work fine, except for one thing I can't yet figure out.

    When I query the Parents or Children directly using LINQ, the SQL reflects the LINQ query (a .Count() executes a COUNT in the database, and so on) - fine. The Parent class has a Children property to access its children. But (and now for the problem) this doesn't expose an IQueryable interface but an ICollection. So when I access the Children property on a particular parent, all the parent's children are read. Even worse, when I access the grandchildren (theParent.Children.SelectMany(child => child.GrandChildren).Count()), then for each and every child a separate request is issued to select all the data of its grandchildren. That's a lot of separate queries!

    Changing the type of the Children property from ICollection to IQueryable doesn't solve this. Apart from missing methods I need, like Add() and Remove(), EF just doesn't recognize the navigation property then.

    Are there correct ways (as in: low database interaction) of querying through children (and what are they)? Or is this just not possible?
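    To illustrate the difference I'm after - if the query starts from the context instead of from a loaded entity, it stays IQueryable end to end (a sketch only; "context" and the navigation property names here are assumptions based on the example above):

        // Sketch: composing the whole query against the context means EF can
        // translate it into a single COUNT statement in SQL...
        int count = context.Children
            .Where(c => c.Parents.Any(p => p.Id == dadId)) // many-to-many back to the parent
            .SelectMany(c => c.GrandChildren)
            .Count();

        // ...whereas this runs against materialized collections in memory,
        // triggering one load per child:
        // theParent.Children.SelectMany(child => child.GrandChildren).Count();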

    Read the article

  • How to get current Joomla user with external PHP script

    - by Joey Adams
    I have a couple of PHP scripts used for AJAX queries, but I want them to be able to operate under the umbrella of Joomla's authentication system. Is the following safe? Are there any unnecessary lines?

    joomla-auth.php (located in the same directory as Joomla's index.php):

        <?php
        define( '_JEXEC', 1 );
        define('JPATH_BASE', dirname(__FILE__));
        define( 'DS', DIRECTORY_SEPARATOR );
        require_once ( JPATH_BASE .DS.'includes'.DS.'defines.php' );
        require_once ( JPATH_BASE .DS.'includes'.DS.'framework.php' );

        /* Create the Application */
        $mainframe =& JFactory::getApplication('site');

        /* Make sure we are logged in at all. */
        if (JFactory::getUser()->id == 0)
            die("Access denied: login required.");
        ?>

    test.php:

        <?php
        include 'joomla-auth.php';
        echo 'Logged in as "' . JFactory::getUser()->username . '"';
        /* We then proceed to access things only the user
           of that name has access to. */
        ?>

    Read the article

  • Gmail IMAP OAuth for desktop clients

    - by Sabya
    Recently Google announced that they are supporting OAuth for Gmail IMAP/SMTP. I browsed through their various documents, but I am still confused about whether they support OAuth for installed applications.

    1. In this documentation they say: "Note: Though the OAuth protocol supports the desktop/installed application use case, Google only supports OAuth for web applications." But they also have a document for OAuth for installed applications.

    2. When I read the OAuth specification they point to, it says (in section 11.7): "In many applications, the Consumer application will be under the control of potentially untrusted parties. For example, if the Consumer is a freely available desktop application, an attacker may be able to download a copy for analysis. In such cases, attackers will be able to recover the Consumer Secret used to authenticate the Consumer to the Service Provider."

    Also, I think the disclaimer in point 1 above is about the Google Data APIs, and surely IMAP/SMTP is not a part of them.

    I understand that for installed applications I can have a setup like:
    - Have a small web app at, say, example.com for my application.
    - This web app talks to Google and gets the access token.
    - The installed application talks to example.com only to get the access token.
    - The installed application then talks to Google with the access token.

    I am now confused. Is this the only way?

    Read the article

  • Linq - how does it work??

    - by clarkeyboy
    Hey, I have just been looking into Linq with ASP.Net. It is very neat indeed. I was just wondering - how do all the classes get populated?

    I mean, in ASP.Net, suppose you have a Linq file called Catalogue, and you then use a For loop to loop through Catalogue.Products and print each product name. How do the details get stored? Does it just go through the Products table on page load and create another instance of class Product for each row, effectively copying an entire table into an array of class Product?

    If so, I think I have created a system very much like this, in the sense that there is a SiteContent module with an instance of each Manager class - for example there are UserManager, ProductManager, SettingManager and the like. UserManager contains an instance of the User class for each row in the Users table. The managers also contain methods such as Create, Update and Remove. These managers and their "items" are created on every page load. This just makes it nice and easy to access users, products, settings etc in every page as far as I, the developer, am concerned. In any subsequent pages I need to create, I just reference SiteContent.UserManager to access a list of users, rather than executing a query from within that page (i.e. this method separates out data access from the workings of the page, in the same way as using code-behind separates out the workings of the page from how the page is laid out).

    However, the problem is that this technique seems rather slow. I mean, it is effectively creating a database on every page load, taking data from another database. I have taken measures such as preventing, for example, the ProductManager from being created if it is not referenced on page load. Therefore it does not load data into storage when it is not needed.

    My question is basically whether my technique does the exact same thing as Linq, in the sense of duplicating data from tables into properties of classes. Thanks in advance for any advice or answers about this. Regards, Richard Clarke
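    For what it's worth, the premise in the first paragraph can be made concrete - a minimal sketch, assuming Catalogue is a generated LINQ to SQL data context with a mapped Product class:

        // Nothing is copied into an array up front: the query is deferred, and
        // each operation below sends its own SQL when it actually executes.
        using (var db = new Catalogue())
        {
            // No SQL runs here - this is just a query definition.
            IQueryable<Product> query = db.Products.Where(p => p.Price > 10m);

            // Executes: SELECT COUNT(*) FROM Products WHERE Price > 10
            int n = query.Count();

            // Executes a second query, streaming rows as they are enumerated.
            foreach (Product p in query)
                Console.WriteLine(p.Name);
        }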

    Read the article

  • IE History Tracking, IFRAMES, and Cross Domain error...

    - by peiklk
    So here's the deal. We have a Flash application that is running within an HTML file. For one page we call a legacy reporting system in ASP.NET that is within an IFRAME. This page then communicates back to the Flash application using cross-domain scripting (document.domain = "domain" is set in both pages). THIS ALL WORKS.

    Now the kicker. Flash has history tracking enabled. This loads the history.js file that creates a div tag to store page changes so the back and forward buttons work in the browser. This works for Firefox and Chrome, as they create a div tag. HOWEVER, in Internet Explorer, history.js creates another IFRAME (instead of a DIV) called ie_historyFrame. When the ScriptResource.axd code attempts to access this with:

        var frameDoc = this._historyFrame.contentWindow.document;

    we get an "Access is Denied" error message. ARGH!

    We've tried getting a handle to this IFRAME and inserting the document.domain code. FAIL. We've tried editing the historytemplate.html file that Flex also uses, to include document.domain... FAIL. I've tried to edit the underlying ASP.NET page to disable history tracking in the ScriptManager control. FAIL.

    At my wit's end on this one. We have users who need to use IE to access this site. They are big clients who we cannot tell to just use Firefox. Any suggestions would be greatly appreciated.

    Read the article

  • Rails: Should partials be aware of instance variables?

    - by Alexandre
    Ryan Bates' nifty_scaffolding, for example, does this:

    edit.html.erb:

        <%= render :partial => 'form' %>

    new.html.erb:

        <%= render :partial => 'form' %>

    _form.html.erb:

        <%= form_for @some_object_defined_in_action %>

    That hidden state makes me feel uncomfortable, so I usually like to do this:

    edit.html.erb:

        <%= render :partial => 'form', :locals => { :object => @my_object } %>

    _form.html.erb:

        <%= form_for object %>

    So which is better: a) having partials access instance variables or b) passing a partial all the variables it needs?

    I've been opting for b) as of late, but I did run into a little pickle:

    some_action.html.erb:

        <% @dad.sons.each do |a_son| %>
          <%= render :partial => 'partial', :locals => { :son => a_son } %>
        <% end %>

    _partial.html.erb:

        The son's name is <%= son.name %>
        The dad's name is <%= son.dad.name %>

    son.dad makes a database call to fetch the dad! So I would either have to access @dad, which would be going back to a) having partials access instance variables, or I would have to pass @dad in locals, changing render :partial to:

        <%= render :partial => 'partial', :locals => { :dad => @dad, :son => a_son } %>

    and for some reason passing a bunch of vars to my partial makes me feel uncomfortable. Maybe others feel this way as well. Hopefully that made some sense. Looking for some insight into this whole thing... Thanks!

    Read the article

  • Practical value for concurrent-request-timeout parameter

    - by Andrei
    In the Seam Reference Guide, one can find this paragraph: "We can set a sensible default for the concurrent request timeout (in ms) in components.xml:"

        <core:manager concurrent-request-timeout="500" />

    However, we found that 500 ms is not nearly enough time for most of the cases we had to deal with, especially with the severe restriction Seam places on conversation access.

    In our application we have a combination of page-scoped ajax requests (triggered by various user actions), some global-scoped polling notification logic (part of the header, so included in every page) and regular links that invoke actions and/or navigate to other pages. Therefore, we get the dreaded concurrent access to conversation exception way too often, even without any significant load on the site.

    After researching the options for quite a bit, we ended up bumping this value to several seconds (we're debating whether to bump it up to 10s), as none of the recommended solutions seemed able to solve our issue completely (even forcing a global queue for all the ajax requests would still leave us exposed to a user deciding to click a link right when one of our polling calls was in progress). And we'd much rather have the users wait for a second or two instead of getting an error page just because they clicked a link at the wrong moment.

    And now to the question: is there something obvious we're missing (like a way to allow concurrent access to conversations and taking care of the needed locking ourselves, for instance :)? How do people solve this problem (ajax requests mixed with user-driven interaction) in Seam? Disabling all the links on the page while ajax requests are in progress (as suggested by one blog page) is really not a viable option. Any other suggestions? TIA, Andrei

    Read the article

  • IPhone app with SSL client certs

    - by Pavel Georgiev
    I'm building an iPhone app that needs to access a web service over HTTPS using client certificates. If I put the client cert (in PKCS12 format) in the app bundle, I'm able to load it into the app and make the HTTPS call (largely thanks to stackoverflow.com). However, I need a way to distribute the app without any certs and leave it to the user to provide his own certificate.

    I thought I would do that by instructing the user to import the certificate into the iPhone's profiles (Settings - General - Profiles), which is what you get by opening a .p12 file in Mail.app, and then I would access that item in my app. I would expect that the certificates in profiles are available through the keychain API, but I guess I'm wrong on that.

    1) Is there a way to access, in my app, a certificate that I've already loaded in the iPhone's profiles?
    2) What other options do I have for loading a user-specified certificate in my app?

    The only thing I can come up with is providing some interface where the user can give a URL to his .p12 certificate, which I can then load into the app's keychain for later use, but that's not exactly user-friendly. I'm looking for something that would allow the user to put the cert on the phone (email it to himself) and then load it in my app.

    Read the article

  • N-Tier Architecture - Structure with multiple projects in VB.NET

    - by focus.nz
    I would like some advice on the best approach to use in the following situation...

    I will have a Windows application and a web application (presentation layers); these will both access a common business layer. The business layer will look at a configuration file to find the name of the dll (data layer) which it will create a reference to at runtime (is this the best approach? A rough sketch of what I mean follows at the end).

    The reason for creating the reference to the data access layer at runtime is that the application will interface with a different 3rd-party accounting system depending on what the client is using. So I would have a separate data access layer to support each accounting system. These could be separate setup projects; each client would use one or the other, they wouldn't need to switch between the two.

    Projects:
    - MyCompany.Common.dll - Contains interfaces; all other projects have a reference to this one.
    - MyCompany.Windows.dll - Windows Forms project, references MyCompany.Business.dll
    - MyCompany.Web.dll - Website project, references MyCompany.Business.dll
    - MyCompany.Business.dll - Business layer, references MyCompany.Data.* (at runtime)
    - MyCompany.Data.AccountingSys1.dll - Data layer for accounting system 1
    - MyCompany.Data.AccountingSys2.dll - Data layer for accounting system 2

    The project MyCompany.Common.dll would contain all the interfaces; each other project would have a reference to this one:

        Public Interface ICompany
            ReadOnly Property Id() As Integer
            Property Name() As String
            Sub Save()
        End Interface

        Public Interface ICompanyFactory
            Function CreateCompany() As ICompany
        End Interface

    The projects MyCompany.Data.AccountingSys1.dll and MyCompany.Data.AccountingSys2.dll would contain classes like the following:

        Public Class Company
            Implements ICompany

            Protected _id As Integer
            Protected _name As String

            Public ReadOnly Property Id As Integer Implements MyCompany.Common.ICompany.Id
                Get
                    Return _id
                End Get
            End Property

            Public Property Name As String Implements MyCompany.Common.ICompany.Name
                Get
                    Return _name
                End Get
                Set(ByVal value As String)
                    _name = value
                End Set
            End Property

            Public Sub Save() Implements MyCompany.Common.ICompany.Save
                Throw New NotImplementedException()
            End Sub
        End Class

        Public Class CompanyFactory
            Implements ICompanyFactory

            Public Function CreateCompany() As ICompany Implements MyCompany.Common.ICompanyFactory.CreateCompany
                Return New Company()
            End Function
        End Class

    The project MyCompany.Business.dll would provide the business rules and retrieve data from the data layer:

        Public Class Companies
            Public Shared Function CreateCompany() As ICompany
                Dim factory As New MyCompany.Data.CompanyFactory
                Return factory.CreateCompany()
            End Function
        End Class

    Any opinions/suggestions would be greatly appreciated.
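    To make the "reference at runtime" part concrete, this is roughly what I have in mind, shown in C# for brevity (the appSettings key and the convention that the factory lives at <assembly name>.CompanyFactory are assumptions, not settled design):

        // Rough sketch only - config key and type-name convention are assumed.
        using System;
        using System.Configuration;
        using System.Reflection;

        public static class DataLayerLoader
        {
            public static ICompanyFactory CreateCompanyFactory()
            {
                // e.g. <add key="DataLayerAssembly" value="MyCompany.Data.AccountingSys1" />
                string assemblyName = ConfigurationManager.AppSettings["DataLayerAssembly"];
                Assembly dataLayer = Assembly.Load(assemblyName);

                // Both data layers expose a CompanyFactory implementing ICompanyFactory.
                Type factoryType = dataLayer.GetType(assemblyName + ".CompanyFactory");
                return (ICompanyFactory)Activator.CreateInstance(factoryType);
            }
        }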

    Read the article

  • How can I make a server log file available via my ASP.NET MVC website?

    - by Gary McGill
    I have an ASP.NET MVC website that works in tandem with a Windows Service that processes file uploads. For easy maintenance of the site, I'd like the log file for the Windows Service to be accessible (to me, only) via the website, so that I can hit http://myserver/logs/myservice to view the contents of the log file. How can I do that?

    At a guess, I could either have the service write its log file in a "Logs" folder at the top level of the site, or I could leave it where it is and set up a virtual directory to point to it. Which of these is better - or is there another, better way?

    Wherever the file is stored, I can see that there's going to be another problem. I tried out the first option (Logs folder in my website), but when I try to access the file via HTTP I get an error:

        The process cannot access the file 'foo' because it is being used by another process.

    Now, I know from experience that my service keeps the file locked for writing while it's running, but that I can still open the file in Notepad to view the current contents. (I'm surprised that IIS insists on write access, if that's what's happening). How can I get around that? Do I really have to write a handler to read the file and serve it to the browser myself? Or can I fix this with configuration or somesuch?

    PS. I'm using IIS7 if that helps.
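    In case the answer is "yes, write the handler yourself", here's the minimal sketch I had in mind (the log location, route and authorization rule are placeholders): opening the file with FileShare.ReadWrite should avoid conflicting with the service's write lock, the same way Notepad does.

        // Minimal sketch - path and [Authorize] filter are placeholders.
        using System.IO;
        using System.Web.Mvc;

        [Authorize(Users = "me")] // restrict access to yourself
        public class LogsController : Controller
        {
            public ActionResult MyService()
            {
                string path = Server.MapPath("~/Logs/myservice.log"); // assumed location

                // FileShare.ReadWrite lets us read while the service keeps writing.
                using (var stream = new FileStream(path, FileMode.Open,
                                                   FileAccess.Read, FileShare.ReadWrite))
                using (var reader = new StreamReader(stream))
                {
                    return Content(reader.ReadToEnd(), "text/plain");
                }
            }
        }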

    Read the article

  • Importing MEF-Plugins into MVC Controllers

    - by Marks
    Hello. There are several examples of using MEF to plug whole controller/view packages into an MVC application, but I didn't find one using MEF to plug in functional parts used by other controllers. For example, think of a NewsService with a simple interface like:

        interface INewsService
        {
            List<NewsItem> GetAllNews();
        }

    that gets news from wherever it wants and returns them in a list of NewsItems. My page should load an exported INewsService and show the news on the page. But there is the problem: I can't just use [Import] in the controllers, as they are only created when they are needed.

    Edit: (Importing them into the main MVCApplication class doesn't work, because I can't access it from the controllers.) I think I found a way to access the main app via HttpContext.ApplicationInstance. But the service object in this instance is null, although it was created successfully in the Application_Start() method. Any idea why?

    So, how can I access the NewsService from within a controller? Thanks in advance, Marks
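    For what it's worth, the closest I've come is composing the controller instance itself when it's constructed - a sketch only, with the plugin folder path assumed (and in practice you'd build the container once and share it, not rebuild it per request as done here for brevity):

        using System.ComponentModel.Composition;
        using System.ComponentModel.Composition.Hosting;
        using System.Web.Hosting;
        using System.Web.Mvc;

        public class NewsController : Controller
        {
            [Import]
            private INewsService _newsService;

            public NewsController()
            {
                var catalog = new DirectoryCatalog(
                    HostingEnvironment.MapPath("~/Plugins")); // assumed plugin folder
                var container = new CompositionContainer(catalog);

                // Satisfies the [Import]s on this instance, even though MVC
                // (not MEF) created the controller.
                container.ComposeParts(this);
            }

            public ActionResult Index()
            {
                return View(_newsService.GetAllNews());
            }
        }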

    Read the article

  • Windows Programming: ID2D1Bitmap Interface - Getting the Bitmap Data

    - by LostInBrackets
    I've been writing my own library of functions to access some of the new Direct2D Windows libraries. In particular, I've been working on the ID2D1Bitmap interface. I wanted to write a function to return a pointer to the start of the bitmap data (for the editing of particular pixels, or custom encoding, or whatever else I might wish for in the future).

    Unfortunately... problem ahead... I can't seem to find a way to get access to the raw pixel data from the ID2D1Bitmap interface. Does anyone have an idea how to access this?

    One of my friends suggested drawing the bitmap to a surface and extracting the bitmap data from there. I don't know if this would work. It definitely seems inefficient, and I wouldn't know which kind of surface to use.

    Any help is appreciated. (C++ in particular, but I assume the code won't be too different between languages.)

    (I know I could just read the data directly from the file, but I'm using the WIC decoders, which means it could be in any number of indecipherable formats.)

    Read the article
