Search Results

Search found 318 results on 13 pages for 'gokhan sever'.

Page 11/13

  • Installing XAMPP on a system that already has MySQL

    - by Charith
    I'm rather new to PHP and XAMPP. I have a computer with MySQL Server and MySQL Workbench already installed from working with Java and NetBeans. Now I want to use the same machine for developing PHP and other web stuff too. I installed XAMPP successfully, but when I try to access phpMyAdmin it gives me an error saying the MySQL server rejected its connection. I tried stopping my current MySQL service and installing it again; however, XAMPP has its own MySQL server in its installation path too. I tried configuring config.inc.php to use my existing installation of MySQL, which is on a separate path, but I failed. Can anyone please instruct me how to configure XAMPP to use my existing MySQL server for everything and ignore the one bundled with it? I don't want two MySQL services running on my system and clashing in the future. I'd also be glad if anyone can explain what is best to use when you're developing Java, PHP, C and all the rest on the same machine. P.S.: I have been given a password for my existing MySQL server (user = root), as one usually does when installing MySQL alone.
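
    One approach that may do what's wanted here: keep XAMPP's bundled MySQL stopped and point phpMyAdmin at the existing service. A minimal sketch of the relevant config.inc.php entries (the host, port and auth mode are assumptions to adapt to your install):

        <?php
        /* phpMyAdmin config.inc.php -- hypothetical values, adjust to your setup */
        $i = 1;
        $cfg['Servers'][$i]['host']      = '127.0.0.1';  // the pre-existing MySQL service
        $cfg['Servers'][$i]['port']      = '3306';       // default MySQL port
        $cfg['Servers'][$i]['auth_type'] = 'cookie';     // prompt for the root password
        $cfg['Servers'][$i]['user']      = 'root';
        $cfg['Servers'][$i]['AllowNoPassword'] = false;  // a root password is set

    Keeping XAMPP's own MySQL service disabled (e.g. in the XAMPP control panel) ensures only one mysqld ever runs, which avoids the two-services clash described above.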


  • Get Internal IP Address From DHCP Hostname

    - by ell
    I would like to try and get the internal IP address of one of the computers on my network. The reason for this is I have a little home server box downstairs, but every time I want to SSH into it I have to open my router configuration, go to the DHCP client table and look up its IP address. For example, I would like to be able to type ssh ell-server instead of ssh 192.168.1.105 or whatever it happens to be. My network configuration is like so:

        - Router downstairs that is connected to the Internet and is running a DHCP server
        - My server computer (ell-server): a headless PC connected to the router via ethernet cable, running Ubuntu 11.04 Server Edition
        - My laptop upstairs (ell-laptop): running Ubuntu 11.10 Desktop Edition, connected wirelessly
        - Other (irrelevant) computers: 2 x Windows XP, 1 x Xubuntu, all connected with cables. (It seemed to me the method of connection isn't useful information but I put it in anyway, just in case. If I have missed any information please tell me.)

    Do I have to run a DNS server on one of my computers? If so, which one? And does that mean I will have to run a DDNS client on each computer? Thanks in advance, ell.
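
    Short of running a full DNS server, a static name mapping may be enough here. A sketch (the address is the one from the question; give ell-server a static IP or a reserved DHCP lease first so it stops moving):

        # /etc/hosts on ell-laptop -- local name lookup, no DNS server required
        192.168.1.105   ell-server

        # or, per-user, in ~/.ssh/config on ell-laptop:
        Host ell-server
            HostName 192.168.1.105
            User ell

    With Ubuntu on both ends, ssh ell-server.local via Avahi/mDNS can also work without any configuration, provided avahi-daemon is installed on the server.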


  • SSH into remote server using Public-private keys

    - by maria
    Hi, I have recently set up SSH on two Linux machines (let's call them server-a and client-b). I generated a key pair on client-b using ssh-keygen and can see both the public and private files in the .ssh dir; I have named them 'example' and 'example.pub'. Then I added example.pub to server-a's authorized_keys file. When I try to SSH into server-a it still requests password authentication, whereas I want a passwordless login (the private key on client-b is set up without a passphrase). When I try to SSH with '-v' I get the following output:

        debug1: Next authentication method: publickey
        debug1: Trying private key: /Users/abc/.ssh/identity
        debug1: Offering public key: /Users/abc/.ssh/id_rsa
        debug2: we sent a publickey packet, wait for reply
        debug1: Authentications that can continue: publickey,keyboard-interactive
        debug1: Offering public key: /Users/abc/.ssh/id_dsa
        debug2: we sent a publickey packet, wait for reply
        debug1: Authentications that can continue: publickey,keyboard-interactive
        debug2: we did not send a packet, disable method
        debug1: Next authentication method: keyboard-interactive
        debug2: userauth_kbdint
        debug2: we sent a keyboard-interactive packet, wait for reply
        debug2: input_userauth_info_req
        debug2: input_userauth_info_req: num_prompts 1
        Password:

    Please help.
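
    Note that the -v output shows ssh offering only the default identities (identity, id_rsa, id_dsa) and never ~/.ssh/example, which would explain the fallback to a password. A sketch of the two usual fixes (paths taken from the question):

        # one-off: tell ssh which key to offer
        ssh -i ~/.ssh/example user@server-a

        # permanent: in ~/.ssh/config on client-b
        Host server-a
            IdentityFile ~/.ssh/example

        # also worth checking on server-a -- sshd ignores authorized_keys
        # when permissions are too loose:
        chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys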


  • How do I secure Sql Server 2008 R2

    - by Mark Tait
    I have both a dedicated and a VPS (from Fasthosts) virtual server - the web sites/applications I run on these access Sql Server stored on the same web server. Until now, I have logged onto Sql Server on both the dedicated and VPS server from Sql Server Management Studio - until I noticed in my server application logs multiple attempts to log on to Sql Server using the 'sa' username with a failed password. So someone/some bot is trying hard (repeatedly every couple of hours, approx 20 attempts during each instance) to log on... so obviously I have to lock down remote access to Sql Server. What I have done is gone into Configuration Manager, and in Sql Server Network Configuration - Protocols for Sql2008 and also in Sql Native Client 10.0 Configuration - Client Protocols - I have disabled Named Pipes and TCP/IP (and VIA, by default). I have left Shared Memory enabled. I also disabled the Sql Server Browser in Sql Server Services. Now the only way I can manage the databases on these servers is by logging on to them via Remote Desktop. Can anyone confirm if this is the correct way of stopping anyone maliciously logging on to Sql Server? (I'm not a DBA or security expert - and there are hundreds of articles advising all different ways - but I was hoping for the experts here to confirm, or otherwise, whether what I've done is correct.) Thank you, Mark
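
    Two commonly recommended extra steps target the 'sa' account itself, since that is the name the bots are guessing at. A sketch in plain T-SQL (create another sysadmin login first so you don't lock yourself out; the new name below is an example):

        -- disable the built-in sa login outright
        ALTER LOGIN sa DISABLE;

        -- or at least take it out of the attack dictionary by renaming it
        ALTER LOGIN sa WITH NAME = [dba_admin];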


  • Trying to connect simple VB6 ADO to SQL Server 2008

    - by Henry
    We have a VB6 app that - for current purposes - is very basic ADO: Dim a new ADODB.Recordset, set some basic properties like CursorLocation and LockType, set a connection string and a .Source like "Select * from CustomerMaster", and .Open - nothing fancy here! Yet, on a new SBS installation with SQL Server 2008 across 2 servers (one for apps, the other dedicated to SQL!), it dies/hangs/crashes if you try to run such a query from anything but the SQL Server box. Initially we were using the SQLOLEDB.1 driver, which would crash/hang the entire SQL Server after about 4 such queries (we built a simple 6-line app just for this purpose). Then we switched to the NATIVE SQL driver, which did allow us unlimited, happy queries - until you did the first change/update - THEN it would corrupt the SQL Server if you exited and tried to go back in. All this 'corruption' is happening from the 'app server' of the SBS pair, and I presume that the app server (also installed in tandem with the SQL server this week) has the latest MDACs, etc. And running it from a 'lowly XP workstation' is (obviously) no better. ANY ideas??? -Henry
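
    For reference, a minimal sketch of the pattern described, spelled out against the SQL Server 2008 Native Client OLE DB provider (server, database and cursor settings are placeholders/assumptions, not Henry's actual code):

        Dim cn As New ADODB.Connection
        Dim rs As New ADODB.Recordset

        ' SQLNCLI10 = SQL Server 2008 Native Client; SQLOLEDB = the legacy provider
        cn.ConnectionString = "Provider=SQLNCLI10;Server=SQLBOX;" & _
                              "Database=MyDb;Integrated Security=SSPI;"
        cn.Open

        rs.CursorLocation = adUseClient   ' client-side cursor
        rs.Open "Select * from CustomerMaster", cn, adOpenStatic, adLockOptimistic

    If a plain SELECT like this reliably hangs the server from any remote box, the problem is unlikely to be in the ADO code itself.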


  • Better logging for cronjob output using /usr/bin/logger

    - by Stefan Lasiewski
    I am looking for a better way to log cronjobs. Most cronjobs tend to spam email or the console, get ignored, or create yet another logfile. In this case, I have a Nagios NSCA script which sends data to a central Nagios server. This send_nsca script also prints a single status line to STDOUT, indicating success or failure.

        0 * * * * root /usr/local/nagios/sbin/nsca_check_disk

    This emails the following message to root@localhost, which is then forwarded to my team of sysadmins. Spam.

        forwarded nsca_check_disk: 1 data packet(s) sent to host successfully.

    I'm looking for a log method which:

        - Doesn't spam the messages to email or the console
        - Doesn't create yet another crufty logfile which requires cleanup months or years later
        - Captures the log information somewhere, so it can be viewed later if desired
        - Works on most unixes
        - Fits into an existing log infrastructure
        - Uses common syslog conventions like 'facility'

    Some of these are third-party scripts, and don't always do logging internally.

    UPDATE 2010-04-30: In the process of writing this question, I think I have answered myself. So I'll answer myself "Jeopardy-style". Is there any problem with this method? The following will send any cron output to /usr/bin/logger, which will send it to syslog with a 'tag' of 'nsca_check_disk'. Syslog handles it from there. My systems (CentOS and FreeBSD) already handle log rotation.

        */5 * * * * root /usr/local/nagios/sbin/nsca_check_disk 2>&1 |/usr/bin/logger -t nsca_check_disk

    /var/log/messages now has one additional message which says this:

        Apr 29 17:40:00 192.168.6.19 nsca_check_disk: 1 data packet(s) sent to host successfully.

    I like /usr/bin/logger because it works well with an existing syslog configuration and infrastructure, and is included with most Unix distros. Most *nix distributions already do log rotation, and do it well.
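
    On the 'facility' point: logger's -p flag takes a facility.level pair, so the same crontab line can route the output wherever syslog.conf sends that facility (the choice of cron.info here is an assumption):

        # tag the output and file it under the cron facility at info level
        */5 * * * * root /usr/local/nagios/sbin/nsca_check_disk 2>&1 | /usr/bin/logger -p cron.info -t nsca_check_disk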


  • Backing up large network (~200 clients) -- Enough Bandwidth?

    - by mtkoan
    My company wants to institute a backup plan for all of the clients on our network, which is about 200. We back up our servers and SQL databases regularly, but it's been our policy not to back up individuals. What is most critical for people is their Documents and PST files in Outlook. PST files can be very large, and most people's are ~1-1.5 GB around here. So with PST files alone that is 200-300 GB of data needing to be transferred daily to a server for backup. Or compressing first, then transferring - but many of the machines are VERY old, and such a task would grind their computers to a halt. Isn't this the reason networks use things like VMware - to reduce network traffic and streamline backups? Or is this only to reduce hardware costs? Would this much network traffic every day drastically slow down our network? Enough to the point we'd have to mandate it be done at night only? Or could we stagger them throughout the day? Really appreciate any input, thank you.
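
    Rough arithmetic on the stated numbers (assuming the full 300 GB moves every day and an overnight window is available):

        300 GB in a 10-hour window:
          300 GB ~= 307,200 MB over 36,000 s  ->  ~8.5 MB/s  ->  ~68 Mbit/s sustained

    That saturates a 100 Mbit LAN but is comfortable on gigabit - and an incremental scheme that only transfers changed data would cut the figure far below that.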


  • Synergy Windows 7 Screen Saver issue

    - by SynergyUser
    I have the Synergy server running on a Windows 7 laptop, and another laptop running Windows XP as the client. When Synergy is first started, the screen saver will start after the 5 minutes of inactivity, no problem; but after mouse/keyboard input and then waiting another 5+ minutes, the screen saver will not come back on. The mouse is on the server, NOT the client. I have tried unchecking "sync screen savers". I have tried right-clicking and running the Synergy server as administrator. I have tried versions 1.3.4, 1.3.6 and 1.4.2, both 32-bit and 64-bit. I have tried running in XP compatibility mode. I have tried disconnecting the client. It feels like I've tried everything. The setup is as follows: the server is below the client, the client is above the server. The server has a secondary monitor attached to it, and yes, I've tried running without the secondary monitor. The server is an Acer 5552G-5828. Any help would be awesome! I love Synergy, but without the screensavers it is annoying. Thanks!


  • How to kill an ostensibly immortal process?

    - by DeeDee
    I had some huge file transfers operating on an NFS mount. The server on which the mount point resided was carelessly rebooted, and now the server from which these large transfers were initiated seems to be bogged down by them. If I run top, I see the following: The first thing I tried was to run kill with each of the -1, -2, -9 and -15 flags against each of the process IDs shown above in turn. This allowed me to proceed, but didn't kill the processes. The next thing I attempted was to reboot the server, but neither reboot nor shutdown -r now worked. When I ran shutdown -r now, the standard broadcast message was sent out, but the server did not reboot. I confirmed this by looking at the server uptime, which was 25 days. So now I'm a little stuck. I'm running these commands as root. EDIT: Here's another interesting tidbit: in top, I don't see any other processes using more than a fraction of a percent of memory or more than 5% of CPU. EDIT 2: output of /var/log/messages
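
    Processes blocked on a dead NFS mount typically sit in uninterruptible sleep (state D in ps/top), which no signal - not even -9 - can interrupt; the usual way out is to pull the stale mount out from under them. A sketch (the mount point is a placeholder):

        # confirm the transfers are in state D (uninterruptible sleep)
        ps -eo pid,stat,wchan:20,cmd | awk '$2 ~ /D/'

        # force, then lazily, unmount the stale NFS mount
        umount -f /mnt/nfs
        umount -l /mnt/nfs   # detach now; clean up when the last reference dies

    Mounting with the 'soft' or 'intr' options (with their usual data-integrity caveats) is the standard way to make the next careless reboot less painful.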


  • Windows Share permissions

    - by Armando
    I have a SQL/file server whose file share and SQL instance I am replicating, using ArcServe RHA, to a replica server. Everything seems to work as far as the replication of the SQL instance and share is concerned. When I fail over to the replica server, the DNS host A record is modified to point to the replica server's IP address, so if I do an NSLOOKUP on ServerA it then points to the IP address of ServerB. And since the SQL instance is named the same, I can still map my ODBC connections to ServerA and still make a SQL connection. But when I try to open \\ServerA\Share I get an error that says I do not have permissions to the share. I think this is because it uses Kerberos authentication and the share is tied to the actual server host name. I have tried putting in a CNAME pointing to ServerA, disabling strict name checking on ServerB, and adding the CNAME to OptionalNames in the registry, but I am still getting the error when ServerA is powered off. Is there a way to reset the authentication of the share to use the DNS CNAME?
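
    If Kerberos is the culprit, the piece not mentioned above is a Service Principal Name for the alias - without one, clients requesting a ticket for ServerA cannot authenticate against ServerB. A sketch of the commonly cited fix (names from the question; verify the SPN format against your domain):

        :: register SPNs for the failover name against ServerB's computer account
        setspn -A host/ServerA ServerB
        setspn -A host/ServerA.yourdomain.local ServerB

    Combined with the DisableStrictNameChecking and OptionalNames changes already made, this is the usual recipe for making a share answer to an alias.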


  • VNC Server that can be used from command line?

    - by jesusiniesta
    I'm looking for a replacement for a custom VNC server that we have been using in my company for a long time. I need a simple executable that can be run from the command line by an IT support tool without the user noticing it (our application will warn the user; we just don't want them to see that we are using that VNC server). I need it to support Windows, and preferably also OS X. The only option I've found is UltraVNC, but I can't configure it from the command line to accept loopback connections without authentication. We already have a whole VNC viewer + VNC repeater + bouncers architecture, and the only missing piece is the VNC server. Do you know of any solution you could suggest? I'm afraid I'll end up developing a new VNC server myself, maybe based on an open-source one. EDIT: When I said I don't want the user to notice this VNC server, I should have added that I don't want them even noticing the installation. So better if it can be installed silently or run as a portable executable (for instance, UltraVNC can be installed and run as a service from the command line, or simply executed quietly with only a notification icon; its problem is that I can't run it without authentication).


  • How can I prevent a DDOS attack on Amazon EC2?

    - by cwd
    One of the servers I use is hosted on the Amazon EC2 cloud. Every few months we appear to have a DDoS attack on this server. This slows the server down incredibly. After around 30 minutes, and sometimes a reboot later, everything is back to normal. Amazon has security groups and a firewall, but what else should I have in place on an EC2 server to mitigate or prevent an attack? From similar questions I've learned:

        - Limit the rate of requests per minute (or second) from a particular IP address via something like iptables (or maybe UFW?) - sketched below
        - Have enough resources to survive such an attack - or - possibly build the web application so it is elastic, has an elastic load balancer and can quickly scale up to meet such high demand
        - If using MySQL, set up connections so that they run sequentially, so that slow queries won't bog down the system

    What else am I missing? I would love information about specific tools and configuration options (again, using Linux here), and/or anything that is specific to Amazon EC2. P.S.: Notes about monitoring for DDoS would also be welcomed - perhaps with Nagios? ;)
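
    A sketch of per-IP rate limiting with iptables' recent module (the thresholds are arbitrary examples - tune before trusting them):

        # track new HTTP connections per source IP
        iptables -A INPUT -p tcp --dport 80 -m state --state NEW \
            -m recent --set --name HTTP

        # drop a source that opens more than 20 new connections in 60 seconds
        iptables -A INPUT -p tcp --dport 80 -m state --state NEW \
            -m recent --update --seconds 60 --hitcount 20 --name HTTP -j DROP

    The caveat: this helps against floods from a handful of IPs; a genuinely distributed attack from thousands of sources needs the elastic/load-balancer approach (or an upstream scrubbing service) instead.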


  • How do I know if I managed to completely remove an undetected trojan?

    - by ubuntuisbetter
    I caught a trojan that uses explorer.exe to reproduce itself if its autostart entry or main exe file in Programs/x is deleted. It had already tried to contact a suspicious server via explorer.exe; I blocked that with my firewall. I:

        - Removed the autostart entries from the registry
        - Looked through my services for anything suspicious
        - Deleted the trojan from Programs/
        - Went through System Volume Information to find a 2-month-old explorer.exe and replaced the possibly infected one

    There are no suspicious processes running now anymore (no duplicate explorer.exe), and nothing tries to connect to this trojan owner's server either. I checked my system with several anti-malware programs too. What the trojan did:

        - Started a second explorer.exe
        - Whenever I deleted the main trojan exe file, it was reproduced (by the second explorer.exe)
        - Whenever I deleted the autostart entry, it was reproduced by the explorer.exe too

    When I terminated the suspicious explorer.exe - which used only half as much memory as the less suspicious one from Windows - a strange thing that I know from the computers in my informatics class happened: a window popped up in the top left of my explorer-less desktop, titled "Personal settings for ... are ...", that obviously copied some files. Then both explorer.exes started again and the trojan was everywhere again. What did the trojan actually do to get explorer to rescue it? Is my PC clean of this newish trojan now? What other locations should I check for the trojan? The trojan doesn't seem very high-level; could it have changed other system files, or is the autostart entry vital for it? Thanks in advance, your trojan-paranoid friend (getting Linux in a week)
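
    On "what other locations should I check": the common Windows autostart points can be listed from a command prompt, and the Winlogon values are classic spots for exactly this explorer.exe-resurrection trick (a partial sketch - a tool like Sysinternals Autoruns enumerates far more):

        reg query "HKLM\Software\Microsoft\Windows\CurrentVersion\Run"
        reg query "HKLM\Software\Microsoft\Windows\CurrentVersion\RunOnce"
        reg query "HKCU\Software\Microsoft\Windows\CurrentVersion\Run"
        :: Shell should be exactly explorer.exe; Userinit should be userinit.exe,
        reg query "HKLM\Software\Microsoft\Windows NT\CurrentVersion\Winlogon" /v Shell
        reg query "HKLM\Software\Microsoft\Windows NT\CurrentVersion\Winlogon" /v Userinit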


  • Compare cells in two different spreadsheets and extract data from one and place it in the other if a match is found

    - by Fergie
    I need to find a way to compare two spreadsheets and, if there is a match on specific cells, pull data from one sheet to another. Say the two spreadsheets contain a value that identifies a piece of equipment:

        spreadsheet 1        spreadsheet 2
        Server               Server      Serial #
        123abc               123abc      123-xx-456

    There are of course many, many records/rows in each sheet. I need to look at the first cell in the Server column of sheet 1 and then search a range of cells in the Server column of sheet 2 for a match. If there is a match, I need to pull the Serial # value from the cell in the matching row and put it into the Serial # cell of the matching row in sheet 1 (all of the Serial # cells in sheet 1 are presently empty). If that explanation is too convoluted, I can clarify by answering any questions you may have. My deadline for this task is noon tomorrow, 30 Aug 2012. Yes, I got the task today at noon... I am not an Excel user and just get thrust into it on occasion... Any help would be a huge assist.
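
    This is the textbook case for VLOOKUP. A sketch assuming the Server values live in column A of both sheets, the serials in column B of sheet 2, and data starting at row 2 (all of those ranges are assumptions to adjust):

        =VLOOKUP(A2, Sheet2!$A$2:$B$1000, 2, FALSE)

    Put that in the first Serial # cell of sheet 1 and fill down; wrapping it as =IFERROR(VLOOKUP(...), "") leaves the cell blank when no match exists instead of showing #N/A.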


  • Career in mobile software/application development [closed]

    - by pramod
    I'm planning to do a course on wireless & mobile computing. The syllabus is given below. Please check it and let me know whether it's worth doing, and how the job prospects are after that. I'm a fresher from an electronics engineering background. The modules are:

    Wireless and Mobile Computing (WiMC) – Modules

        - C, C++ Programming and Data Structures – 100 hours: C Revision; C, C++ programming tools on Linux (vi editor, gdb etc.); OOP concepts; Programming constructs; Functions; Access Specifiers; Classes and Objects; Overloading; Inheritance; Polymorphism; Templates; Data Structures in C++: Arrays, Stacks, Queues, Linked Lists (singly, doubly, circular), Trees, Threaded Trees, AVL Trees, Graphs, Sorting (bubble, quick, heap, merge)
        - System Development Methodology – 18 hours: Software life cycle and various life cycle models; Project Management; Software: A Process; Various Phases in s/w Development; Risk Analysis and Management; Software Quality Assurance; Introduction to Coding Standards; Software Project Management; Testing Strategies and Tactics; Project Management and Introduction to Risk Management
        - Java Programming – 110 hours: Data Types, Operators and Language Constructs; Classes and Objects, Inner Classes and Inheritance; Interface and Package; Exceptions; Threads; java.lang; java.util; java.awt; java.io; java.applet; java.swing; XML, XSL, DTD; Java network programming; Introduction to servlets
        - Mobile and Wireless Technologies – 30 hours: Basics of Wireless Technologies; Cellular Communication: single cell systems, multi-cell systems, frequency reuse, analog cellular systems, digital cellular systems; GSM standard: Mobile Station, BTS, BSC, MSC, SMS server, call processing and protocols; CDMA standard: spread spectrum technologies; 2.5G and 3G Systems: HSCSD, GPRS, W-CDMA/UMTS, 3GPP and international roaming, multimedia services; CDMA-based cellular mobile communication systems; Wireless Personal Area Networks: Bluetooth, IEEE 802.11a/b/g standards; Mobile Handset Device Interfacing: data cables, IrDA, Bluetooth, touch-screen interfacing; Wireless Security; Telemetry
        - Java Wireless Programming and Applications Development (J2ME) – 100 hours: J2ME Architecture; the CLDC and the KVM; Tools and Development Process; Classification of CLDC Target Devices; CLDC Collections API; CLDC Streams Model; MIDlets; MIDlet Lifecycle; MIDP Programming; MIDP Event Architecture; High-Level Event Handling; Low-Level Event Handling; the CLDC Networking Package; the MIDP Implementation; Introduction to WAP, WMLScript and XHTML; Introduction to Multimedia Messaging Services (MMS)
        - Symbian Programming – 60 hours: Symbian OS basics; Symbian OS services; Symbian OS organization; GUI approaches; ROM building; Debugging; Hardware abstraction; Base porting; Symbian OS reference design porting; File systems; Overview of Symbian OS Development (DevKits, CustKits and SDKs); CodeWarrior Tool; Application & UI Development; Client Server Framework; ECOM; STDLIB in Symbian
        - iPhone Programming – 80 hours: Introducing iPhone core specifications; Understanding iPhone input and output; Designing web pages for the iPhone; Capturing iPhone events; Introducing WebKit; CSS transforms, transitions and animations; Using iUI for web apps; Using Canvas for web apps; Building web apps with Dashcode; Writing Dashcode programs; Debugging iPhone web pages; SDK programming for web developers; An introduction to object-oriented programming; Introducing the iPhone OS; Using Xcode and Interface Builder; Programming with the SDK Toolkit
        - OS Concepts & Linux Programming – 60 hours: Operating system concepts: what is an OS?, Processes, Scheduling & Synchronization, Memory management, Virtual Memory and Paging; Linux Architecture; Programming in Linux; Linux Shell Programming; Writing Device Drivers; Configuring and Building the GNU Cross-toolchain; Configuring and Compiling Linux; Virtual File System; Porting Linux to Target Hardware
        - WinCE.NET and Database Technology – 80 hours: Execution Process in the .NET Environment; Language Interoperability; Assemblies; Need for C#; Operators; Namespaces & Assemblies; Arrays; Preprocessors; Delegates and Events; Boxing and Unboxing; Regular Expressions; Collections; Multithreaded Programming; Memory Management; Exception Handling; WinForms; Working with databases; ASP.NET Server Controls and client-side scripts; ASP.NET Web Server Controls; Validation Controls; Principles of database management; Need for RDBMS; Client/Server Computing; RDBMS Technologies; Codd's Rules; Data Models; Normalization Techniques; ER Diagrams; Data Flow Diagrams; Database recovery & backup; SQL
        - Android Application – 80 hours: Introduction to Android; Why develop for Android; Android SDK features; Creating Android activities; Fundamental Android UI design; Intents, adapters, dialogs; Android techniques for saving data; Databases in Android; Maps, geocoding, location-based services; Toasts, using alarms, instant messaging; Using Bluetooth; Using telephony; Introducing the sensor manager; Managing network and Wi-Fi connections; Advanced Android development; Linux kernel security; Implementing an AIDL interface
        - Project – 120 hours


  • Autoscaling in a modern world… Part 4

    - by Steve Loethen
    Now that I have the rules and services XML files in the cloud, it is time to sever the bounds of earth and live totally in the cloud. I have to host the autoscaling object in Azure as well, point it to the rules, tell it the management certs and get out of the way. A couple of questions. Where to host? The most obvious place to me was a worker role: a simple, single-purpose worker role, doing nothing but watching my app. Here are the steps I used:

        1) Created a project - a separate project from my web site. I wanted to be able to run the web in the cloud and the autoscaler locally for debugging purposes. Seemed like the easiest way.
        2) Added the Wasabi block to the project.
        3) Configured the settings. I used the same settings used for the console app. It points to the same web role and uses the same rules file.
        4) Made sure the certificate needed to manage the role is added to the cert store in the sky ("LocalMachine" and "My" are the default locations).

    I ran the worker role in the local fabric. It worked. I then published to the cloud, and verified it worked again. Here is what my code looked like:

        public override bool OnStart()
        {
            Trace.WriteLine("Set Default Connection Limit", "Information");
            // Set the maximum number of concurrent connections
            ServicePointManager.DefaultConnectionLimit = 12;

            Trace.WriteLine("Set up configuration change code", "Information");
            // set up config
            CloudStorageAccount.SetConfigurationSettingPublisher((configName, configSetter) =>
                configSetter(RoleEnvironment.GetConfigurationSettingValue(configName)));

            Trace.WriteLine("Get current diagnostic configuration", "Information");
            // Get current diagnostic configuration
            DiagnosticMonitorConfiguration dmc = DiagnosticMonitor.GetDefaultInitialConfiguration();

            Trace.WriteLine("Set Diagnostic Buffer Size", "Information");
            // Set Diagnostic Buffer size
            dmc.Logs.BufferQuotaInMB = 4;

            Trace.WriteLine("Set log transfer period", "Information");
            // Set log transfer period
            dmc.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);

            Trace.WriteLine("Set log verbosity", "Information");
            // Set log filter to verbose
            dmc.Logs.ScheduledTransferLogLevelFilter = LogLevel.Verbose;

            Trace.WriteLine("Start the diagnostic monitor", "Information");
            // Start the diagnostic monitor
            DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", dmc);

            Trace.WriteLine("Get the current Autoscaler from the EntLib Container", "Information");
            // Get the current Autoscaler from the EntLib Container
            scaler = EnterpriseLibraryContainer.Current.GetInstance<Autoscaler>();

            Trace.WriteLine("Start the autoscaler", "Information");
            // Start the autoscaler
            scaler.Start();

            Trace.WriteLine("call the base class OnStart", "Information");
            // call the base class OnStart
            return base.OnStart();
        }

        public override void OnStop()
        {
            Trace.WriteLine("Stop the Autoscaler", "Information");
            // Stop the Autoscaler
            scaler.Stop();
        }

    I did have to turn on some basic logging for Wasabi, which I will cover in the next post. That let me figure out that I hadn't done the certificate step.


  • Required Parameters [SSIS Denali]

    - by jamiet
    SQL Server Integration Services (SSIS) in its 2005 and 2008 incarnations expects you to set property values within your package at runtime using Configurations. SSIS developers tend to have rather a lot of issues with SSIS configurations; in this blog post I am going to highlight one of those problems and how it has been alleviated in SQL Server code-named Denali.

    A configuration is a property path/value pair that exists outside of a package, typically within SQL Server or in a collection of one or more configurations in a file called a .dtsConfig file. Within the package one defines a pointer to a configuration that says to the package "When you execute, go and get a configuration value from this location", and if all goes well the package will fetch that configuration value as it starts to execute and you will see something like the following in your output log:

        Information: 0x40016041 at Package: The package is attempting to configure from the XML file "C:\Configs\MyConfig.dtsConfig".

    Unfortunately things DON'T always go well. Perhaps the .dtsConfig file is unreachable, or the name of the SQL Server holding the configuration value has been defined incorrectly - any one of a number of things can go wrong. In that circumstance you might see something like the following in your log output instead:

        Warning: 0x80012014 at Package: The configuration file "C:\Configs\MyConfig.dtsConfig" cannot be found. Check the directory and file name.

    The problem that I want to draw attention to here, though, is that your package will ignore the fact it can't find the configuration and execute anyway. This is really really bad, because the package will not be doing what it is supposed to do and, worse, if you have not isolated your environments you might not even know about it. Can you imagine a package executing for months and all the while inserting data into the wrong server? Sounds ridiculous, but I have absolutely seen this happen, and the root cause was that no one picked up on configuration warnings like the one above. Happily, in SSIS code-named Denali this problem has gone away, as configurations have been replaced with parameters. Each parameter has a property called 'Required': any parameter with Required=True must have a value passed to it when the package executes, and any attempt to execute the package without one will result in an error. Here we see that error when attempting to execute using the SSMS UI, and similarly when executing using T-SQL. The error is:

        Msg 27184, Level 16, State 1, Procedure prepare_execution, Line 112
        In order to execute this package, you need to specify values for the required parameters.

    As you can see, SSIS code-named Denali has mechanisms built in to prevent the problem I described at the top of this blog post. Marking a parameter as Required means that any package in that project cannot execute until a value for the parameter has been supplied. This is a very good thing. I am loath to make recommendations so early in the development cycle, but right now I'm thinking that all Project Parameters should have Required=True - certainly any that are used to define external locations should, anyway. @Jamiet
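
    For completeness, a sketch of supplying a required parameter when executing via the SSIS catalog from T-SQL (folder/project/parameter names are placeholders; object_type = 30 denotes a package parameter and 20 a project parameter - worth double-checking against your Denali build):

        DECLARE @execution_id BIGINT;
        EXEC SSISDB.catalog.create_execution
             @folder_name = N'MyFolder', @project_name = N'MyProject',
             @package_name = N'Package.dtsx', @execution_id = @execution_id OUTPUT;

        -- satisfy the Required parameter before starting
        EXEC SSISDB.catalog.set_execution_parameter_value
             @execution_id, @object_type = 30,
             @parameter_name = N'MyRequiredParam', @parameter_value = N'SomeValue';

        EXEC SSISDB.catalog.start_execution @execution_id;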


  • Using Private Extension Galleries in Visual Studio 2012

    - by Jakob Ehn
    Note: The installer and the complete source code are available over at CodePlex at the following location: http://inmetavsgallery.codeplex.com

    Extensions and add-ins are everywhere in the Visual Studio ALM ecosystem! Microsoft releases new cool features in the form of extensions, and the list of 3rd-party extensions that plug into Visual Studio just keeps growing. One of the nice things about VSIX extensions is how they are deployed: Microsoft hosts a public Visual Studio Gallery where you can upload extensions and make them available to the rest of the community. Visual Studio checks for updates to the installed extensions when you start Visual Studio, and installing/updating the extensions is fast, since it is only a matter of extracting the files within the VSIX package to the local extension folder. But for custom, enterprise-specific extensions, you don't want to publish them online to the whole world - yet you still want an easy way to distribute them to your developers and partners. This is where Private Extension Galleries come into play. In Visual Studio 2012 it is now possible to add custom extension galleries that can point to any URL, as long as that URL returns the expected content of course (see below). Registering a new gallery in Visual Studio is easy, but there is very little documentation on how to actually host the gallery. Visual Studio galleries use Atom feed XML as the protocol for delivering new and updated versions of the extensions. This MSDN page describes how to create a static XML file that returns the information about your extensions. This approach works, but requires manual updates of that file every time you want to deploy an update of the extension. Wouldn't it be nice with a web service that takes care of this for you, that just lets you drop a new version of your VSIX file and have it automatically detect the new version and produce the correct Atom feed XML? Well, search no more - this is exactly what the Inmeta Visual Studio Gallery Service does for you :-) Here you can see that in addition to the standard Online galleries there is an Inmeta Gallery that contains two extensions (our WiX templates and our custom TFS check-in policies). These can be installed/updated in the same way as extensions from the public Visual Studio Gallery.

    Installing the Service

        1. Download the installer (Inmeta.VSGalleryService.Install.msi) for the service and run it. The installation is straightforward: just select the web site, application pool and (optionally) a virtual directory where you want to install the service. Note: if you want to run it in the web site root, just leave the application name blank. Press Next and finish the installer.
        2. Open web.config in a text editor and locate the <applicationSettings> element.
        3. Edit the following setting values:
           - FeedTitle: the name that is shown if you browse to the service using a browser. Not used by Visual Studio.
           - BaseURI: when Visual Studio downloads the extension, it will be given this URI plus the name of the extension that you selected. This value should be in the following format: http://SERVER/[VDIR]/gallery/extension/
           - VSIXAbsolutePath: the path where you will deploy your extensions. This can be a local folder or a remote share; you just need to make sure that the application pool identity account has read permissions in this folder.
        4. Save web.config to finish the installation.
        5. Open a browser and enter the URL to the service. It should show an empty feed page.

    Adding the Private Gallery in Visual Studio 2012

    Now you need to add the gallery in Visual Studio. This is very easy and is done as follows:

        1. Go to Tools –> Options and select Environment –> Extensions and Updates.
        2. Press Add to add a new gallery.
        3. Enter a descriptive name, and add the URL that points to the web site/virtual directory where you installed the service in the previous step.
        4. Press OK to save the settings.

    Deploying an Extension

    This one is easy: just drop the file in the designated folder! :-) If it is a new version of an existing extension, the developers will be notified in the same way as for extensions from the public Visual Studio Gallery. I hope that you will find this service useful; please contact me if you have questions or suggestions for improvements!
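
    For anyone curious what such a service has to emit, here is a rough sketch of the Atom feed shape the gallery expects, simplified from the MSDN description mentioned above (element and namespace details are from memory and should be verified against the VSIX syndication schema):

        <?xml version="1.0" encoding="utf-8"?>
        <feed xmlns="http://www.w3.org/2005/Atom">
          <title>Inmeta Gallery</title>
          <entry>
            <id>Inmeta.CheckinPolicies</id>
            <title>Inmeta TFS Checkin Policies</title>
            <!-- src points at the downloadable .vsix -->
            <content type="application/octet-stream"
                     src="http://SERVER/gallery/extension/Inmeta.CheckinPolicies.vsix"/>
            <Vsix xmlns="http://schemas.microsoft.com/developer/vsx-syndication-schema/2010">
              <Id>Inmeta.CheckinPolicies</Id>
              <Version>1.0.0.0</Version>
            </Vsix>
          </entry>
        </feed>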


  • Asp.Net membership via ASP.NET Website Administrator Tool

    - by luppi
    I created a database with aspnet_regsql; the database was created in SQL Server 2008 and not in the data (App_Data) folder in my project (do I need to move it to the folder manually?). Next, in the Web Site Administration Tool I went to the Provider section and clicked the Test button. I got an error: "Could not establish a connection to the database. If you have not yet created the SQL Server database, exit the Web Site Administration tool, use the aspnet_regsql command-line utility to create and configure the database, and then return to this tool to set the provider." Maybe I need to set something in web.config, like membership settings or connection strings (or should the ASP.NET Website Administration Tool create those settings for me)? Update: Maybe it happens because I am using the full SQL Server 2008 and not Express? Update 2: After setting the membership section and the connection string to my aspnetdb database in the Web Site Administration Tool, I opened Security - Security Setup Wizard - Define Roles (stage 4) and got this error:

        An error was encountered. Please return to the previous page and try again. The following message may help in diagnosing the problem: Unable to connect to SQL Server database.
        at System.Web.Administration.WebAdminPage.CallWebAdminHelperMethod(Boolean isMembership, String methodName, Object[] parameters, Type[] paramTypes)
        at ASP.security_wizard_wizardpermission_ascx.OnInit(EventArgs e)
        at System.Web.UI.Control.InitRecursive(Control namingContainer)
        at System.Web.UI.Control.InitRecursive(Control namingContainer)
        at System.Web.UI.Control.InitRecursive(Control namingContainer)
        at System.Web.UI.Control.InitRecursive(Control namingContainer)
        at System.Web.UI.Control.InitRecursive(Control namingContainer)
        at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)
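
    The Administration Tool does not create these settings for you. A minimal web.config sketch pointing the standard membership and role providers at a full SQL Server 2008 instance (the server and database names are assumptions to adapt; no need to move the database into App_Data for this to work):

        <connectionStrings>
          <clear/>
          <add name="ApplicationServices"
               connectionString="Data Source=.;Initial Catalog=aspnetdb;Integrated Security=True"
               providerName="System.Data.SqlClient"/>
        </connectionStrings>
        <system.web>
          <membership defaultProvider="AspNetSqlMembershipProvider">
            <providers>
              <clear/>
              <add name="AspNetSqlMembershipProvider"
                   type="System.Web.Security.SqlMembershipProvider"
                   connectionStringName="ApplicationServices"
                   applicationName="/"/>
            </providers>
          </membership>
          <roleManager enabled="true" defaultProvider="AspNetSqlRoleProvider">
            <providers>
              <clear/>
              <add name="AspNetSqlRoleProvider"
                   type="System.Web.Security.SqlRoleProvider"
                   connectionStringName="ApplicationServices"
                   applicationName="/"/>
            </providers>
          </roleManager>
        </system.web>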


  • Heroku Rails Internal Server Error

    - by Ryan Max
    Hello. I got a 500 Internal Server error when I try to deploy my Rails app on Heroku. It works fine on my local machine, so I'm not sure what's wrong here. It seems to be something with the "sessions" on the home controller. Here is my log:

        ==> production.log <==
        # Logfile created on Sun May 09 17:35:59 -0700 2010
        Processing HomeController#index (for 76.169.212.8 at 2010-05-09 17:36:00) [GET]

        ActiveRecord::StatementInvalid (PGError: ERROR: relation "sessions" does not exist
        : SELECT a.attname, format_type(a.atttypid, a.atttypmod), d.adsrc, a.attnotnull
          FROM pg_attribute a
          LEFT JOIN pg_attrdef d ON a.attrelid = d.adrelid AND a.attnum = d.adnum
          WHERE a.attrelid = '"sessions"'::regclass
            AND a.attnum > 0 AND NOT a.attisdropped
          ORDER BY a.attnum
        ):
        lib/authenticated_system.rb:106:in `login_from_session'
        lib/authenticated_system.rb:12:in `current_user'
        lib/authenticated_system.rb:6:in `logged_in?'
        lib/authenticated_system.rb:35:in `authorized?'
        lib/authenticated_system.rb:53:in `login_required'
        /home/heroku_rack/lib/static_assets.rb:9:in `call'
        /home/heroku_rack/lib/last_access.rb:25:in `call'
        /home/heroku_rack/lib/date_header.rb:14:in `call'
        thin (1.2.7) lib/thin/connection.rb:76:in `pre_process'
        thin (1.2.7) lib/thin/connection.rb:74:in `catch'
        thin (1.2.7) lib/thin/connection.rb:74:in `pre_process'
        thin (1.2.7) lib/thin/connection.rb:57:in `process'
        thin (1.2.7) lib/thin/connection.rb:42:in `receive_data'
        eventmachine (0.12.10) lib/eventmachine.rb:256:in `run_machine'
        eventmachine (0.12.10) lib/eventmachine.rb:256:in `run'
        thin (1.2.7) lib/thin/backends/base.rb:57:in `start'
        thin (1.2.7) lib/thin/server.rb:156:in `start'
        thin (1.2.7) lib/thin/controllers/controller.rb:80:in `start'
        thin (1.2.7) lib/thin/runner.rb:177:in `send'
        thin (1.2.7) lib/thin/runner.rb:177:in `run_command'
        thin (1.2.7) lib/thin/runner.rb:143:in `run!'
        thin (1.2.7) bin/thin:6
        /usr/local/bin/thin:20:in `load'
        /usr/local/bin/thin:20
        Rendering /disk1/home/slugs/155328_f2d3c00_845e/mnt/public/500.html (500 Internal Server Error)

    And here is my home_controller.rb:

        class HomeController < ApplicationController
          before_filter :login_required

          def index
            @user = current_user
            @user.profile ||= Profile.new
            @profile = @user.profile
          end
        end

    Does it have something to do with the way my routes are set up? Or is it my authentication? (I am using Restful Authentication with Bort.)
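
    The error says the sessions table simply doesn't exist in the Heroku Postgres database, which points at the ActiveRecord session store being used without its migration having run there. A sketch of the usual fix (Rails 2.x-era commands, matching the Bort stack; untested against this exact app):

        # generate the sessions-table migration if the app doesn't have one yet
        rake db:sessions:create

        # push the code and run migrations on Heroku
        git push heroku master
        heroku rake db:migrate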


  • Spell checker software

    - by Naren
    Hello guys. I have been assigned a task to find a decent spell checker (UK English), preferably a free one, for a project that we are doing. I have looked at the Google AJAX API for this. The project contains some young persons' (kids less than 18 years old) data which shouldn't be exposed or stored outside the application boundaries. Google logs the data for research purposes, which means Google owns whatever data we send over the wire through the Google API. Is this right? I fired an email to Google regarding the privacy of data and storage, but they haven't come back to me. If you have some knowledge regarding this, please share it with me. At this point our servers might not have access to external entities, which means we might not be able to use a web API over the wire - but that may change in the future. So I have to find some spell checker alternatives that can sit in our environment and do the job, or an external API. Would you mind sharing your findings and knowledge in this regard? I would prefer free services, but you never know - if you have some cracking spell checker for a few quid, then I don't mind recommending it to the project board. Technology used: ASP.NET 3.5/4.0, MVC, jQuery, SQL Server 2008, etc. Cheers, Naren


  • Server won't start when using authlogic-oauth2

    - by Yahoo-Me
    I have included oauth2 and authlogic-oauth2 in the Gemfile, as I want to use them, and am trying to start the server. It doesn't start and gives me this error:

        /Library/Ruby/Gems/1.8/gems/railties-3.0.3/lib/rails.rb:44:in `configuration': undefined method `config' for nil:NilClass (NoMethodError)
        from /Library/Ruby/Gems/1.8/gems/authlogic_oauth2-1.1.2/lib/authlogic_oauth2.rb:14
        from /Library/Ruby/Gems/1.8/gems/bundler-1.0.7/lib/bundler/runtime.rb:64:in `require'
        from /Library/Ruby/Gems/1.8/gems/bundler-1.0.7/lib/bundler/runtime.rb:64:in `require'
        from /Library/Ruby/Gems/1.8/gems/bundler-1.0.7/lib/bundler/runtime.rb:62:in `each'
        from /Library/Ruby/Gems/1.8/gems/bundler-1.0.7/lib/bundler/runtime.rb:62:in `require'
        from /Library/Ruby/Gems/1.8/gems/bundler-1.0.7/lib/bundler/runtime.rb:51:in `each'
        from /Library/Ruby/Gems/1.8/gems/bundler-1.0.7/lib/bundler/runtime.rb:51:in `require'
        from /Library/Ruby/Gems/1.8/gems/bundler-1.0.7/lib/bundler.rb:112:in `require'
        from /Users/arkidmitra/Documents/qorm_bzar/buyzaar/config/application.rb:7
        from /Library/Ruby/Gems/1.8/gems/railties-3.0.3/lib/rails/commands.rb:28:in `require'
        from /Library/Ruby/Gems/1.8/gems/railties-3.0.3/lib/rails/commands.rb:28
        from /Library/Ruby/Gems/1.8/gems/railties-3.0.3/lib/rails/commands.rb:27:in `tap'
        from /Library/Ruby/Gems/1.8/gems/railties-3.0.3/lib/rails/commands.rb:27
        from script/rails:6:in `require'
        from script/rails:6

    I am using Rails 3.0.3 and Ruby 1.8.7. Also, the server seems to start fine until I add gem "authlogic-oauth2" to the Gemfile.
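
    The backtrace shows authlogic_oauth2.rb touching Rails configuration at require time, before the application object exists. A common Bundler workaround is to defer the require until Rails is up (a sketch, untested against this gem version):

        # Gemfile -- load the gem, but skip it during Bundler.require
        gem "oauth2"
        gem "authlogic_oauth2", :require => false

        # config/initializers/authlogic_oauth2.rb -- require it once Rails has booted
        require "authlogic_oauth2"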


  • jQuery AutoSuggest that doesn't kill the server on every keyup

    - by nobosh
    I'm working to build a jQuery-enabled AutoSuggest plugin, inspired by Apple's Spotlight. Here is the general code:

        $(document).ready(function() {
            $('#q').bind('keyup', function() {
                if ($(this).val().length == 0) {
                    // Hide the q-suggestions box
                    $('#q-suggestions').fadeOut();
                } else {
                    // Show the AJAX Spinner
                    $("#q").css("background-image", "url(/images/ajax-loader.gif)");
                    $.ajax({
                        url: '/search/spotlight/',
                        data: {"q": $(this).val()},
                        success: function(data) {
                            $('#q-suggestions').fadeIn();   // Show the q-suggestions box
                            $('#q-suggestions').html(data); // Fill the q-suggestions box
                            // Hide the AJAX Spinner
                            $("#q").css("background-image", "url(/images/icon-search.gif)");
                        }
                    });
                }
            });
        });

    The issue I want to solve well and elegantly is not killing the server. Right now the code above hits the server every time you type a key and does not wait for you to essentially finish typing. What's the best way to solve this?

        A. Kill the previous AJAX request?
        B. Some type of AJAX caching?
        C. Adding some type of delay so .ajax() is only submitted when the person has stopped typing for 300ms or so?

    Thanks
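
    Options C (debounce) and A (abort the in-flight request) combine naturally; a sketch along those lines (the 300ms delay is arbitrary):

        var timer = null, xhr = null;

        $('#q').bind('keyup', function () {
            var $q = $(this);
            clearTimeout(timer);              // restart the countdown on every keystroke
            timer = setTimeout(function () {
                if (xhr) { xhr.abort(); }     // kill any request still in flight
                xhr = $.ajax({
                    url: '/search/spotlight/',
                    data: { q: $q.val() },
                    success: function (data) {
                        $('#q-suggestions').fadeIn().html(data);
                    }
                });
            }, 300);
        });

    The debounce means a fast typist generates one request instead of one per key, and the abort guarantees a stale response can never overwrite a newer one.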


  • What is more viable to use? Javascript libraries or UI Programming tools?

    - by Haresh Karkar
    What is more viable to use: JavaScript libraries (YUI, jQuery, ExtJS) or UI programming tools (GWT, ExtGWT, SmartGWT)? It has become very difficult to choose between them, as they are constantly increasing their capabilities to meet newer requirements. We all know the power of jQuery for UI manipulation. The latest news from Microsoft, that jQuery is officially part of the .NET developer's toolkit, will definitely make jQuery a preferred choice over other JavaScript libraries [see: http://weblogs.asp.net/scottgu/archive/2008/09/28/jquery-and-microsoft.aspx]. But on the other hand, GWT is building a framework which can be used on the client as well as on the server side. This is definitely going to make developers' lives easier, as it does not require a developer to be an expert in browser quirks, XMLHttpRequest and JavaScript in order to develop high-performance web applications. It includes an SDK (Java API libraries, compiler, and a development server which allows you to write client-side applications in Java and deploy them as JavaScript), Speed Tracer, and a plug-in for Eclipse. GWT is used by many products, like Google Wave and AdWords. So the question is still unanswered: what is more viable to use? Any thoughts?


  • Is there any framework for rich web clients on top of html/css?

    - by iamgopal
    Some RAD tools, like OpenObject, use rich web clients. That is, their client-side code resides inside the browser and talks to the server via XML-RPC or JSON-RPC only, changing the view accordingly; all the JavaScript and CSS are transferred only once. Such rich web clients would increase productivity in enterprise-class web applications that have lots of processes, forms and so on. I would like to use such a rich web client inside my own application. I tried to search but found only openerp-web, which is tightly integrated with its server. Is there any other rich web client framework available? If not, is there any design detail I can look into to create my own? Thanks. Edit: A browser is a client which uses HTTP and similar protocols to talk to a web server, which serves pages that the client displays. A rich web client is a client which sits on top of the browser and talks to the server - sending data, receiving data and information about how to update the view - and does it. Similar to Vaadin, such a rich web client eliminates any code requirement on the client side, and all the coding is done on the server side. Below are such thin clients:

        - pjax (jQuery)
        - vaadin (Java)
        - openobject web client (Python)
        - nagare (Python)
        - seaside (Smalltalk)
        - p4a (PHP)

    These are all clients that, once properly set up, allow you to code only on the server and still provide a great AJAX-like experience.

