Search Results

Search found 14900 results on 596 pages for 'git remote repository'.


  • Easily Setup Fluent Nhibernate With Oracle

    Nowadays it is preferred to use an ORM instead of older data access approaches. However, setting up an ORM like Fluent NHibernate with Oracle takes some time. With the help of NuGet you can set up such third-party tools in no time. In this article I am going to show how you can easily configure Fluent NHibernate with Oracle using NuGet. Moreover, the article will guide you in building a generic repository using Fluent NHibernate.

    Read the article

  • What's New in Oracle Supply Chain Management: Key highlights of R12.1 and new solutions (PART 2 of 2)

    The latest EBS 12.1 release provides significant new capabilities in supply chain management that companies can deploy immediately to drive rapid ROI.  In addition, new solutions such as Advanced Planning Command Center, Spare Parts Planning, Demand Signal Repository and Manufacturing Operations Center enable companies to achieve operational excellence, while reducing costs and improving margins. This webcast will discuss the latest release, highlighting new capabilities and how companies can benefit from them.

    Read the article

  • What's New in Oracle Supply Chain Management: Key highlights of R12.1 and new solutions (PART 1 of 2)

    The latest EBS 12.1 release provides significant new capabilities in supply chain management that companies can deploy immediately to drive rapid ROI.  In addition, new solutions such as Advanced Planning Command Center, Spare Parts Planning, Demand Signal Repository and Manufacturing Operations Center enable companies to achieve operational excellence, while reducing costs and improving margins. This webcast will discuss the latest release, highlighting new capabilities and how companies can benefit from them.

    Read the article

  • Python script is exiting with no output and I have no idea why

    - by Adam Tuttle
    I'm attempting to debug a Subversion post-commit hook that calls some python scripts. What I've been able to determine so far is that when I run post-commit.bat manually (I've created a wrapper for it to make it easier) everything succeeds, but when SVN runs it, one particular step doesn't work. We're using CollabNet SVNServe, which I know from the documentation removes all environment variables. This had caused some problems earlier, but shouldn't be an issue now:

        Before Subversion calls a hook script, it removes all variables - including $PATH on Unix, and %PATH% on Windows - from the environment. Therefore, your script can only run another program if you spell out that program's absolute name.

    The relevant portion of post-commit.bat is:

        echo -------------------------- >> c:\svn-repos\company\hooks\svn2ftp.out.log
        set SITENAME=staging
        set SVNPATH=branches/staging/wwwroot/
        "C:\Python3\python.exe" C:\svn-repos\company\hooks\svn2ftp.py ^
            --svnUser="svnusername" ^
            --svnPass="svnpassword" ^
            --ftp-user=ftpuser ^
            --ftp-password=ftppassword ^
            --ftp-remote-dir=/ ^
            --access-url=svn://10.0.100.6/company ^
            --status-file="C:\svn-repos\company\hooks\svn2ftp-%SITENAME%.dat" ^
            --project-directory=%SVNPATH% "staging.company.com" %1 %2 >> c:\svn-repos\company\hooks\svn2ftp.out.log
        echo -------------------------- >> c:\svn-repos\company\hooks\svn2ftp.out.log

    When I run post-commit.bat manually, for example post-commit c:\svn-repos\company 12345, I see output like the following in svn2ftp.out.log:

        --------------------------
        args1: c:\svn-repos\company
        args0: staging.company.com
        abspath: c:\svn-repos\company
        project_dir: branches/staging/wwwroot/
        local_repos_path: c:\svn-repos\company
        getting youngest revision...
        done, up-to-date
        --------------------------

    However, when I commit something to the repo and it runs automatically, the output is:

        --------------------------
        --------------------------

    svn2ftp.py is a bit long, so I apologize but here goes. I'll have some notes/disclaimers about its contents below it.

        #!/usr/bin/env python
        """Usage: svn2ftp.py [OPTION...] FTP-HOST REPOS-PATH

        Upload to FTP-HOST changes committed to the Subversion repository at
        REPOS-PATH. Uses svn diff --summarize to only propagate the changed files

        Options:
         -?, --help                  Show this help message.
         -u, --ftp-user=USER         The username for the FTP server. Default: 'anonymous'
         -p, --ftp-password=P        The password for the FTP server. Default: '@'
         -P, --ftp-port=X            Port number for the FTP server. Default: 21
         -r, --ftp-remote-dir=DIR    The remote directory that is expected to resemble the
                                     repository project directory
         -a, --access-url=URL        This is the URL that should be used when trying to SVN
                                     export files so that they can be uploaded to the FTP
                                     server
         -s, --status-file=PATH      Required. This script needs to store the last successful
                                     revision that was transferred to the server. PATH is the
                                     location of this file.
         -d, --project-directory=DIR If the project you are interested in sending to the FTP
                                     server is not under the root of the repository (/), set
                                     this parameter. Example: -d 'project1/trunk/'
                                     This should NOT start with a '/'.

        2008.5.2 CKS Fixed possible Windows-related bug with tempfile, where the script
        didn't have permission to write to the tempfile. Replaced this with a open()-created
        file created in the CWD.
        2008.5.13 CKS Added error logging. Added exception for file-not-found errors when
        deleting files.
        2008.5.14 CKS Change file open to 'rb' mode, to prevent Python's universal newline
        support from stripping CR characters, causing later comparisons between FTP and SVN
        to report changes.
        """

        try:
            import sys, os
            import logging
            logging.basicConfig(
                level=logging.DEBUG,
                format='%(asctime)s %(levelname)s %(message)s',
                filename='svn2ftp.debug.log',
                filemode='a'
            )
            console = logging.StreamHandler()
            console.setLevel(logging.ERROR)
            logging.getLogger('').addHandler(console)

            import getopt, tempfile, smtplib, traceback, subprocess
            from io import StringIO
            import pysvn
            import ftplib
            import inspect
        except Exception as e:
            logging.error(e)
            #capture the location of the error
            frame = inspect.currentframe()
            stack_trace = traceback.format_stack(frame)
            logging.debug(stack_trace)
            print(stack_trace)
            #end capture
            sys.exit(1)

        #defaults
        host = ""
        user = "anonymous"
        password = "@"
        port = 21
        repo_path = ""
        local_repos_path = ""
        status_file = ""
        project_directory = ""
        remote_base_directory = ""
        toAddrs = "[email protected]"
        youngest_revision = ""

        def email(toAddrs, message, subject, fromAddr='[email protected]'):
            headers = "From: %s\r\nTo: %s\r\nSubject: %s\r\n\r\n" % (fromAddr, toAddrs, subject)
            message = headers + message
            logging.info('sending email to %s...' % toAddrs)
            server = smtplib.SMTP('smtp.company.com')
            server.set_debuglevel(1)
            server.sendmail(fromAddr, toAddrs, message)
            server.quit()
            logging.info('email sent')

        def captureErrorMessage(e):
            sout = StringIO()
            traceback.print_exc(file=sout)
            errorMessage = '\n'+('*'*80)+('\n%s'%e)+('\n%s\n'%sout.getvalue())+('*'*80)
            return errorMessage

        def usage_and_exit(errmsg):
            """Print a usage message, plus an ERRMSG (if provided), then exit.
            If ERRMSG is provided, the usage message is printed to stderr and
            the script exits with a non-zero error code. Otherwise, the usage
            message goes to stdout, and the script exits with a zero errorcode."""
            if errmsg is None:
                stream = sys.stdout
            else:
                stream = sys.stderr
            print(__doc__, file=stream)
            if errmsg:
                print("\nError: %s" % (errmsg), file=stream)
                sys.exit(2)
            sys.exit(0)

        def read_args():
            global host
            global user
            global password
            global port
            global repo_path
            global local_repos_path
            global status_file
            global project_directory
            global remote_base_directory
            global youngest_revision
            try:
                opts, args = getopt.gnu_getopt(sys.argv[1:], "?u:p:P:r:a:s:d:SU:SP:",
                                               ["help",
                                                "ftp-user=",
                                                "ftp-password=",
                                                "ftp-port=",
                                                "ftp-remote-dir=",
                                                "access-url=",
                                                "status-file=",
                                                "project-directory=",
                                                "svnUser=",
                                                "svnPass="
                                                ])
            except getopt.GetoptError as msg:
                usage_and_exit(msg)
            for opt, arg in opts:
                if opt in ("-?", "--help"):
                    usage_and_exit()
                elif opt in ("-u", "--ftp-user"):
                    user = arg
                elif opt in ("-p", "--ftp-password"):
                    password = arg
                elif opt in ("-SU", "--svnUser"):
                    svnUser = arg
                elif opt in ("-SP", "--svnPass"):
                    svnPass = arg
                elif opt in ("-P", "--ftp-port"):
                    try:
                        port = int(arg)
                    except ValueError as msg:
                        usage_and_exit("Invalid value '%s' for --ftp-port." % (arg))
                    if port < 1 or port > 65535:
                        usage_and_exit("Value for --ftp-port must be a positive integer less than 65536.")
                elif opt in ("-r", "--ftp-remote-dir"):
                    remote_base_directory = arg
                elif opt in ("-a", "--access-url"):
                    repo_path = arg
                elif opt in ("-s", "--status-file"):
                    status_file = os.path.abspath(arg)
                elif opt in ("-d", "--project-directory"):
                    project_directory = arg
            if len(args) != 3:
                print(str(args))
                usage_and_exit("host and/or local_repos_path not specified (" + len(args) + ")")
            host = args[0]
            print("args1: " + args[1])
            print("args0: " + args[0])
            print("abspath: " + os.path.abspath(args[1]))
            local_repos_path = os.path.abspath(args[1])
            print('project_dir:',project_directory)
            youngest_revision = int(args[2])
            if status_file == "" :
                usage_and_exit("No status file specified")

        def main():
            global host
            global user
            global password
            global port
            global repo_path
            global local_repos_path
            global status_file
            global project_directory
            global remote_base_directory
            global youngest_revision
            read_args()
            #repository,fs_ptr
            #get youngest revision
            print("local_repos_path: " + local_repos_path)
            print('getting youngest revision...')
            #youngest_revision = fs.youngest_rev(fs_ptr)
            assert youngest_revision, "Unable to lookup youngest revision."
            last_sent_revision = get_last_revision()
            if youngest_revision == last_sent_revision:
                # no need to continue. we should be up to date.
                print('done, up-to-date')
                return
            if last_sent_revision or youngest_revision < 10:
                # Only compare revisions if the DAT file contains a valid
                # revision number. Otherwise we risk waiting forever while
                # we parse and uploading every revision in the repo in the case
                # where a repository is retroactively configured to sync with ftp.
                pysvn_client = pysvn.Client()
                pysvn_client.callback_get_login = get_login
                rev1 = pysvn.Revision(pysvn.opt_revision_kind.number, last_sent_revision)
                rev2 = pysvn.Revision(pysvn.opt_revision_kind.number, youngest_revision)
                summary = pysvn_client.diff_summarize(repo_path, rev1, repo_path, rev2, True, False)
                print('summary len:',len(summary))
                if len(summary) > 0 :
                    print('connecting to %s...' % host)
                    ftp = FTPClient(host, user, password)
                    print('connected to %s' % host)
                    ftp.base_path = remote_base_directory
                    print('set remote base directory to %s' % remote_base_directory)
                    #iterate through all the differences between revisions
                    for change in summary :
                        #determine whether the path of the change is relevant to the path that is being sent, and modify the path as appropriate.
                        print('change path:',change.path)
                        ftp_relative_path = apply_basedir(change.path)
                        print('ftp rel path:',ftp_relative_path)
                        #only try to sync path if the path is in our project_directory
                        if ftp_relative_path != "" :
                            is_file = (change.node_kind == pysvn.node_kind.file)
                            if str(change.summarize_kind) == "delete" :
                                print("deleting: " + ftp_relative_path)
                                try:
                                    ftp.delete_path("/" + ftp_relative_path, is_file)
                                except ftplib.error_perm as e:
                                    if 'cannot find the' in str(e) or 'not found' in str(e):
                                        # Log, but otherwise ignore path-not-found errors
                                        # when deleting, since it's not a disaster if the file
                                        # we want to delete is already gone.
                                        logging.error(captureErrorMessage(e))
                                    else:
                                        raise
                            elif str(change.summarize_kind) == "added" or str(change.summarize_kind) == "modified" :
                                local_file = ""
                                if is_file :
                                    local_file = svn_export_temp(pysvn_client, repo_path, rev2, change.path)
                                print("uploading file: " + ftp_relative_path)
                                ftp.upload_path("/" + ftp_relative_path, is_file, local_file)
                                if is_file :
                                    os.remove(local_file)
                            elif str(change.summarize_kind) == "normal" :
                                print("skipping 'normal' element: " + ftp_relative_path)
                            else :
                                raise str("Unknown change summarize kind: " + str(change.summarize_kind) + ", path: " + ftp_relative_path)
                    ftp.close()
            #write back the last revision that was synced
            print("writing last revision: " + str(youngest_revision))
            set_last_revision(youngest_revision) # todo: undo

        def get_login(a,b,c,d):
            #arguments don't matter, we're always going to return the same thing
            try:
                return True, "svnUsername", "svnPassword", True
            except Exception as e:
                logging.error(e)
                #capture the location of the error
                frame = inspect.currentframe()
                stack_trace = traceback.format_stack(frame)
                logging.debug(stack_trace)
                #end capture
                sys.exit(1)

        #functions for persisting the last successfully synced revision
        def get_last_revision():
            if os.path.isfile(status_file) :
                f=open(status_file, 'r')
                line = f.readline()
                f.close()
                try:
                    i = int(line)
                except ValueError:
                    i = 0
            else:
                i = 0
                f = open(status_file, 'w')
                f.write(str(i))
                f.close()
            return i

        def set_last_revision(rev) :
            f = open(status_file, 'w')
            f.write(str(rev))
            f.close()

        #augmented ftp client class that can work off a base directory
        class FTPClient(ftplib.FTP) :
            def __init__(self, host, username, password) :
                self.base_path = ""
                self.current_path = ""
                ftplib.FTP.__init__(self, host, username, password)

            def cwd(self, path) :
                debug_path = path
                if self.current_path == "" :
                    self.current_path = self.pwd()
                    print("pwd: " + self.current_path)
                if not os.path.isabs(path) :
                    debug_path = self.base_path + "<" + path
                    path = os.path.join(self.current_path, path)
                elif self.base_path != "" :
                    debug_path = self.base_path + ">" + path.lstrip("/")
                    path = os.path.join(self.base_path, path.lstrip("/"))
                path = os.path.normpath(path)
                #by this point the path should be absolute.
                if path != self.current_path :
                    print("change from " + self.current_path + " to " + debug_path)
                    ftplib.FTP.cwd(self, path)
                    self.current_path = path
                else :
                    print("staying put : " + self.current_path)

            def cd_or_create(self, path) :
                assert os.path.isabs(path), "absolute path expected (" + path + ")"
                try:
                    self.cwd(path)
                except ftplib.error_perm as e:
                    for folder in path.split('/'):
                        if folder == "" :
                            self.cwd("/")
                            continue
                        try:
                            self.cwd(folder)
                        except:
                            print("mkd: (" + path + "):" + folder)
                            self.mkd(folder)
                            self.cwd(folder)

            def upload_path(self, path, is_file, local_path) :
                if is_file:
                    (path, filename) = os.path.split(path)
                    self.cd_or_create(path)
                    # Use read-binary to avoid universal newline support from stripping CR characters.
                    f = open(local_path, 'rb')
                    self.storbinary("STOR " + filename, f)
                    f.close()
                else:
                    self.cd_or_create(path)

            def delete_path(self, path, is_file) :
                (path, filename) = os.path.split(path)
                print("trying to delete: " + path + ", " + filename)
                self.cwd(path)
                try:
                    if is_file :
                        self.delete(filename)
                    else:
                        self.delete_path_recursive(filename)
                except ftplib.error_perm as e:
                    if 'The system cannot find the' in str(e) or '550 File not found' in str(e):
                        # Log, but otherwise ignore path-not-found errors
                        # when deleting, since it's not a disaster if the file
                        # we want to delete is already gone.
                        logging.error(captureErrorMessage(e))
                    else:
                        raise

            def delete_path_recursive(self, path):
                if path == "/" :
                    raise "WARNING: trying to delete '/'!"
                for node in self.nlst(path) :
                    if node == path :
                        #it's a file. delete and return
                        self.delete(path)
                        return
                    if node != "." and node != ".." :
                        self.delete_path_recursive(os.path.join(path, node))
                try:
                    self.rmd(path)
                except ftplib.error_perm as msg :
                    sys.stderr.write("Error deleting directory " + os.path.join(self.current_path, path) + " : " + str(msg))

        # apply the project_directory setting
        def apply_basedir(path) :
            #remove any leading stuff (in this case, "trunk/") and decide whether file should be propagated
            if not path.startswith(project_directory) :
                return ""
            return path.replace(project_directory, "", 1)

        def svn_export_temp(pysvn_client, base_path, rev, path) :
            # Causes access denied error. Couldn't deduce Windows-perm issue.
            # It's possible Python isn't garbage-collecting the open file-handle in time for pysvn to re-open it.
            # Regardless, just generating a simple filename seems to work.
            #(fd, dest_path) = tempfile.mkstemp()
            dest_path = tmpName = '%s.tmp' % __file__
            exportPath = os.path.join(base_path, path).replace('\\','/')
            print('exporting %s to %s' % (exportPath, dest_path))
            pysvn_client.export( exportPath,
                                 dest_path,
                                 force=False,
                                 revision=rev,
                                 native_eol=None,
                                 ignore_externals=False,
                                 recurse=True,
                                 peg_revision=rev )
            return dest_path

        if __name__ == "__main__":
            logging.info('svnftp.start')
            try:
                main()
                logging.info('svnftp.done')
            except Exception as e:
                # capture the location of the error for debug purposes
                frame = inspect.currentframe()
                stack_trace = traceback.format_stack(frame)
                logging.debug(stack_trace[:-1])
                print(stack_trace)
                # end capture
                error_text = '\nFATAL EXCEPTION!!!\n'+captureErrorMessage(e)
                subject = "ALERT: SVN2FTP Error"
                message = """An Error occurred while trying to FTP an SVN commit.
                repo_path = %(repo_path)s\n
                local_repos_path = %(local_repos_path)s\n
                project_directory = %(project_directory)s\n
                remote_base_directory = %(remote_base_directory)s\n
                error_text = %(error_text)s
                """ % globals()
                email(toAddrs, message, subject)
                logging.error(e)

    Notes/Disclaimers:

    - I have basically no python training so I'm learning as I go and spending lots of time reading docs to figure stuff out.
    - The body of get_login is in a try block because I was getting strange errors saying there was an unhandled exception in callback_get_login. Never figured out why, but it seems fine now. Let sleeping dogs lie, right?
    - The username and password for get_login are currently hard-coded (but correct) just to eliminate variables and try to change as little as possible at once. (I added the svnuser and svnpass arguments to the existing argument parsing.)

    So that's where I am. I can't figure out why on earth it's not printing anything into svn2ftp.out.log. If you're wondering, the output for one of these failed attempts in svn2ftp.debug.log is:

        2012-09-06 15:18:12,496 INFO svnftp.start
        2012-09-06 15:18:12,496 INFO svnftp.done

    And it's no different on a successful run. So there's nothing useful being logged. I'm lost. I've gone way down the rabbit hole on this one, and don't know where to go from here. Any ideas?
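    One diagnostic worth trying, sketched here under assumptions rather than taken from the original post: since SVNServe launches the hook with an empty environment, dumping the interpreter path, working directory, arguments, and environment to a hard-coded absolute log path at the very top of svn2ftp.py shows what the script actually inherits, independent of the .bat redirection. The log path below is a placeholder:

        # Hypothetical diagnostic block (not in the original script): place at the
        # very top of svn2ftp.py to capture what the hook inherits from SVNServe.
        import os
        import sys

        with open(r'c:\svn-repos\company\hooks\svn2ftp.env.log', 'a') as log:
            log.write('executable: %s\n' % sys.executable)   # which python ran
            log.write('cwd: %s\n' % os.getcwd())             # hook working dir
            log.write('argv: %s\n' % sys.argv)               # args from SVN
            for key, value in sorted(os.environ.items()):    # surviving env vars
                log.write('%s=%s\n' % (key, value))

    Note also that the .bat file's >> redirection captures only stdout; anything the script writes to stderr (a startup traceback, for example) is lost when SVN runs the hook, so appending 2>&1 to the python line's redirection is a guess worth testing.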

    Read the article

  • ASP.NET MVC, Web API, Razor, and Open Source

    - by Leniel Macaferi
    Microsoft has made the ASP.NET MVC source code available under an open source license since the first V1 release. We have also integrated a number of great open source technologies into the product, and now ship jQuery, jQuery UI, jQuery Mobile, jQuery Validation, Modernizr.js, NuGet, Knockout.js, and JSON.NET as integral parts of ASP.NET MVC releases. I am very excited to announce today that we will also release the source code for ASP.NET Web API and ASP.NET Web Pages (also known as Razor) under an open source license (Apache 2.0), and that we will increase the development transparency of all three projects by hosting their code repositories on CodePlex (using the new Git support announced last week - in English). This will enable a more open development model, where the whole community will be able to participate and provide feedback on code checkins, fix bugs, develop new features, and build and test the products daily using the most up-to-date version of the source code and tests possible. We will also, for the first time, allow developers outside of Microsoft to submit fixes and code contributions that the Microsoft development team will review for potential inclusion in the products. We announced a similarly open development approach with the Windows Azure SDK last December, and we think this approach is a great way to build closer relationships, since it allows an excellent feedback loop with developers - and ultimately enables the delivery of even better products as a result. Very importantly - ASP.NET MVC, Web API, and Razor will continue to be fully supported Microsoft products that are released both independently and as part of Visual Studio (exactly the same way it is done today). They will also continue to be developed by the same Microsoft developers who build them today (in fact, we now have many more Microsoft developers working on the ASP.NET team). Our goal with today's announcement is to widen the feedback loop on the products even further, allowing us to deliver even better products. We are really excited about the improvements this will bring.

    Learn more

    You can now browse, sync, and build the source tree of ASP.NET MVC, Web API, and Razor through the website http://aspnetwebstack.codeplex.com. The current Git repository on the site is the development tree for the RC (release candidate) milestone that the team has been working on over the last few weeks, and this same tree contains both the source code and the tests, and can be built and tested by anyone. Because the binaries produced are bin-deployable (DLLs installed directly into the bin folder with no further dependencies), this allows you to compile your own builds and try out product updates as soon as they are added to the repository. You can also now contribute directly to the development of the products by reviewing and sending feedback on code checkins, submitting bugs and helping us verify fixes as soon as they are pushed to the repository, suggesting and giving feedback on new features as they are implemented, as well as by submitting your own fixes or code contributions. Note that all code submissions will be rigorously reviewed and tested by the ASP.NET MVC Team, and only those that meet a high standard of quality and fit with the roadmap defined for upcoming versions will be merged into the product source.

    Summary

    All of us on the team are really excited about today's announcement - this is something we have been working toward for many years. The closer relationship between the community and the developers will enable us to build even better products, taking ASP.NET to the next level in terms of innovation and customer focus. Thank you! Scott

    P.S. Besides the blog, I use Twitter to publish quick posts and share links. My Twitter handle is: @scottgu

    Text translated from the original post by Leniel Macaferi.

    Read the article

  • ASP.NET MVC, Web API, Razor and Open Source

    - by ScottGu
    Microsoft has made the source code of ASP.NET MVC available under an open source license since the first V1 release. We’ve also integrated a number of great open source technologies into the product, and now ship jQuery, jQuery UI, jQuery Mobile, jQuery Validation, Modernizr.js, NuGet, Knockout.js and JSON.NET as part of it. I’m very excited to announce today that we will also release the source code for ASP.NET Web API and ASP.NET Web Pages (aka Razor) under an open source license (Apache 2.0), and that we will increase the development transparency of all three projects by hosting their code repositories on CodePlex (using the new Git support announced last week). Doing so will enable a more open development model where everyone in the community will be able to engage and provide feedback on code checkins, bug-fixes, new feature development, and build and test the products on a daily basis using the most up-to-date version of the source code and tests. We will also for the first time allow developers outside of Microsoft to submit patches and code contributions that the Microsoft development team will review for potential inclusion in the products. We announced a similar open development approach with the Windows Azure SDK last December, and have found it to be a great way to build an even tighter feedback loop with developers – and ultimately deliver even better products as a result. Very importantly - ASP.NET MVC, Web API and Razor will continue to be fully supported Microsoft products that ship both standalone as well as part of Visual Studio (the same as they do today). They will also continue to be staffed by the same Microsoft developers that build them today (in fact, we have more Microsoft developers working on the ASP.NET team now than ever before). Our goal with today’s announcement is to increase the feedback loop on the products even more, and allow us to deliver even better products. We are really excited about the improvements this will bring.

    Learn More

    You can now browse, sync and build the source tree of ASP.NET MVC, Web API, and Razor on the http://aspnetwebstack.codeplex.com web-site. The Git repository on the site is the live RC milestone development tree that the team has been working on the last several weeks, and the tree contains both the runtime sources + tests, and is buildable and testable by anyone. Because the binaries produced are bin-deployable, this allows you to compile your own builds and try product updates out as soon as they are checked-in. You can also now contribute directly to the development of the products by reviewing and sending feedback on code checkins, submitting bugs and helping us verify fixes as they are checked in, suggesting and giving feedback on new features as they are implemented, as well as by submitting code fixes or code contributions of your own. Note that all code submissions will be rigorously reviewed and tested by the ASP.NET MVC Team, and only those that meet an extremely high bar for both quality and design/roadmap appropriateness will be merged into the source.

    Summary

    All of us on the team are really excited about today’s announcement – it has been something we’ve been working toward for many years. The tighter feedback loop is going to enable us to build even better products, and take ASP.NET to the next level in terms of innovation and customer focus. Thanks, Scott

    P.S. In addition to blogging, I use Twitter to do quick posts and share links. My Twitter handle is: @scottgu

    Read the article

  • Create a Slide Show in Windows 7 Media Center

    - by DigitalGeekery
    Are you looking for a nice way to create and display a slide show from your photo collection? Today we’ll show you how to create a slide show, how to add music to it, and how to watch it from the comfort of your couch in Windows 7 Media Center.

    Create Slide Show

    Launch Windows 7 Media Center and click on the Picture Library tile found under Pictures and Videos. In the Pictures Library, scroll across to slide shows and click on Create Slide show. Enter a name for the slide show and click Next. If you are using a Windows Media Center remote, click on the OK button to bring up the onscreen keyboard. Use the directional buttons to navigate across the keyboard and press OK to select each letter. Click Done when finished. Select Picture Library and click Next. Select the pictures to include in your slide show. If using a remote, navigate through the images and press OK to select. If you are using a mouse, simply click on the selections. When you are finished, click Next. Now, we can review and edit the slide show. Click the up or down pointing arrows to move pictures up and down in the order. (More intuitive titles would be helpful in this case, as opposed to the randomly generated titles in the example below.) If you are finished, click Create. You can also choose to go back and add music to your slide show (or even more pictures). We’ll take a look at adding some music in our example. Click on the Add More button.

    Add Music to Your Slide Show

    Here we’ll select Music Library to add a song. Click Next. You’ll now be able to browse your Music Library to select songs for your slide show. Select your songs and click Next. When you are finished adding Music and Pictures, click Create. Once your slide show is saved, you can play it any time by clicking on slide shows in the Picture Library, then selecting the slide show title. Select play slide show when you’re ready to enjoy your new production. If you ever want to edit or delete the slide show, select it in the Picture Library, and scroll to Actions. You’ll see those options under additional commands. You have the option to Edit Slide Show, Burn a CD/DVD, or Delete.

    Editing Slide Show Settings

    Within Media Center, go to Tasks, click on Pictures, then choose Slide Shows. From the Slide Show settings you have the option to Show pictures in random order, Show picture information, Show song information, and Use Pan and zoom effect. You can also adjust the length of time to display each picture, and change the background color. Be sure to click Save to apply any changes before exiting. If you choose to show picture information, the picture title, date, and star rating will be displayed in the top right. If your slide show is accompanied by music and you choose to show song information, you will get a translucent overlay for a few seconds at the beginning of each song to indicate the song, album, and artist. One of the really cool things about creating a slide show in Windows 7 Media Center is that you can complete the entire process using just a Media Center remote. Can’t get enough slide shows? Check out how to turn your desktop into a picture slide show in Windows 7.

    Read the article

  • Oracle Enterprise Manager Ops Center 12c : Enterprise Controller High Availability (EC HA)

    - by Anand Akela
    Contributed by Mahesh Sharma, Oracle Enterprise Manager Ops Center team.

    In Oracle Enterprise Manager Ops Center 12c we introduced a new feature to make the Enterprise Controllers highly available. With EC HA, if the hardware crashes, or if the Enterprise Controller services and/or the remote database stop responding, the enterprise services are immediately restarted on the standby Enterprise Controller without administrative intervention. In today's post, I'll briefly describe EC HA, look at some of the prerequisites, and then show some screen shots of how the Enterprise Controller is represented in the BUI. In my next post, I'll show you how to install the EC in a HA environment and some of the new commands.

    What is EC HA? Enterprise Controller High Availability (EC HA) provides an active/standby fail-over solution for two or more Ops Center Enterprise Controllers, all within an Oracle Clusterware framework. This allows EC resources to relocate to a standby if the hardware crashes, or if certain services fail. It is also possible to manually relocate the services if maintenance on the active EC is required. When the EC services are relocated to the standby, EC services are interrupted only for the period it takes for the EC services to stop on the active node and to start back up on a standby node.

    What are the prerequisites? To install EC in a HA framework, an understanding of the prerequisites is required. There are many possibilities for how these prerequisites can be installed and configured - we will not discuss these in this post. However, best practices should be applied when installing and configuring; I would suggest that you get expert help if you are not familiar with them. Let's briefly look at each of these prerequisites in turn:

    Hardware: Servers are required to host the active and standby node(s). As the nodes will be in a clustered environment, they need to be the same model and configured identically. The nodes should have the same processor class, number of cores, memory, and network cards, for example.

    Operating System: We can use Solaris 10 9/10 or higher, Solaris 11, or OEL 5.5 or higher, on x86 or SPARC.

    Network: There are a number of requirements for network cards in Clusterware, and cables should be networked identically on all the nodes. We must also consider IP allocation for public / private and Virtual IPs (VIPs).

    Storage: Shared storage will be required for the cluster voting disks, Oracle Cluster Registry (OCR), and the EC's libraries.

    Clusterware: Oracle Clusterware version 11.2.0.3 or later is required. This can be downloaded from: http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html

    Remote Database: Oracle RDBMS 11.1.0.x or later is required. This can be downloaded from: http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html

    For detailed information on how to install EC HA, please read: http://docs.oracle.com/cd/E27363_01/doc.121/e25140/install_config-shared.htm#OPCSO242
    For detailed instructions on installing Oracle Clusterware, please read: http://docs.oracle.com/cd/E11882_01/install.112/e17214/chklist.htm#BHACBGII
    For detailed instructions on installing the remote Oracle database, have a read of: http://www.oracle.com/technetwork/database/enterprise-edition/documentation/index.html

    The schematic diagram below gives a visual view of how the prerequisites are connected. When a fail-over occurs, the Enterprise Controller resources and the VIP are relocated to one of the standby nodes.
    The standby node then becomes active and all Ops Center services are resumed.

    Connecting to the Enterprise Controller from your favourite browser: let's presume we have installed and configured all the prerequisites, and installed Ops Center on the active and standby nodes. We can now connect to the active node from a browser, i.e. http://<active_node1>/; this will redirect us to the virtual IP address (VIP). The VIP is the IP address that moves with the Enterprise Controller resource. Once you log on and view the assets, you will see some new symbols; these indicate that the nodes are cluster members, with one being an active member and the other a standby member in this case. If you connect to the standby node, the browser will redirect you to a splash page, indicating that you have connected to the standby node.

    Hope you find this topic interesting. Next time I will post about how to install the Enterprise Controller in the HA framework.

    Read the article

  • Deploying Data Mining Models using Model Export and Import, Part 2

    - by [email protected]
    In my last post, Deploying Data Mining Models using Model Export and Import, we explored using DBMS_DATA_MINING.EXPORT_MODEL and DBMS_DATA_MINING.IMPORT_MODEL to enable moving a model from one system to another. In this post, we'll look at two distributed scenarios that make use of this capability, and a tip for easily moving models from one machine to another using only Oracle Database, not an external file transport mechanism such as FTP.

    In the first scenario, consider a company with geographically distributed business units, each collecting and managing their data locally for the products they sell. Each business unit has in-house data analysts that build models to predict which products to recommend to customers in their space. A central telemarketing business unit also uses these models to score new customers locally using data collected over the phone. Since the models recommend different products, each customer is scored using each model. This is depicted in Figure 1.

    Figure 1: Target instance importing multiple remote models for local scoring.

    In the second scenario, consider multiple hospitals that collect data on patients with certain types of cancer. The data collection is standardized, so each hospital collects the same patient demographic and other health / tumor data, along with the clinical diagnosis. Instead of each hospital building its own models, the data is pooled at a central data analysis lab where a predictive model is built. Once completed, the model is distributed to hospitals, clinics, and doctor offices, who can score patient data locally.

    Figure 2: Multiple target instances importing the same model from a source instance for local scoring.

    Since this blog focuses on model export and import, we'll only discuss what is necessary to move a model from one database to another. Here, we use the package DBMS_FILE_TRANSFER, which can move files between Oracle databases. The script is fairly straightforward, but requires setting up a database link and directory objects. We saw how to create directory objects in the previous post. To create a database link to the source database from the target, we can use, for example:

        create database link SOURCE1_LINK connect to <schema> identified by <password> using 'SOURCE1';

    Note that 'SOURCE1' refers to the service name of the remote database entry in your tnsnames.ora file. From SQL*Plus, first connect to the remote database and export the model. Note that the model_file_name does not include the .dmp extension. This is because export_model appends "01" to this name. Next, connect to the local database, invoke DBMS_FILE_TRANSFER.GET_FILE, and import the model. Note that "01" is eliminated in the target system file name.

        connect <source_schema>/<password>@SOURCE1_LINK;
        BEGIN
          DBMS_DATA_MINING.EXPORT_MODEL ('EXPORT_FILE_NAME' || '.dmp',
                                         'MY_SOURCE_DIR_OBJECT',
                                         'name =''MY_MINING_MODEL''');
        END;

        connect <target_schema>/<password>;
        BEGIN
          DBMS_FILE_TRANSFER.GET_FILE ('MY_SOURCE_DIR_OBJECT',
                                       'EXPORT_FILE_NAME' || '01.dmp',
                                       'SOURCE1_LINK',
                                       'MY_TARGET_DIR_OBJECT',
                                       'EXPORT_FILE_NAME' || '.dmp' );
          DBMS_DATA_MINING.IMPORT_MODEL ('EXPORT_FILE_NAME' || '.dmp',
                                         'MY_TARGET_DIR_OBJECT');
        END;

    To clean up afterward, you may want to drop the exported .dmp file at the source and the transferred file at the target.
For example, utl_file.fremove('&directory_name', '&model_file_name' || '.dmp');
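    The same round trip can also be driven from a single client session. Here is a minimal sketch assuming the Python cx_Oracle driver, with placeholder credentials and net service names; the directory objects and database link are the ones created above, and the PL/SQL is taken verbatim from this post:

        # Sketch: run the export on the source, then transfer + import on the
        # target. Credentials are placeholders; SOURCE1/TARGET are assumed
        # tnsnames.ora entries, not names from this post's environment.
        import cx_Oracle

        EXPORT_PLSQL = """BEGIN
          DBMS_DATA_MINING.EXPORT_MODEL('EXPORT_FILE_NAME' || '.dmp',
                                        'MY_SOURCE_DIR_OBJECT',
                                        'name =''MY_MINING_MODEL''');
        END;"""

        TRANSFER_IMPORT_PLSQL = """BEGIN
          DBMS_FILE_TRANSFER.GET_FILE('MY_SOURCE_DIR_OBJECT',
                                      'EXPORT_FILE_NAME' || '01.dmp',
                                      'SOURCE1_LINK',
                                      'MY_TARGET_DIR_OBJECT',
                                      'EXPORT_FILE_NAME' || '.dmp');
          DBMS_DATA_MINING.IMPORT_MODEL('EXPORT_FILE_NAME' || '.dmp',
                                        'MY_TARGET_DIR_OBJECT');
        END;"""

        # Export on the source database...
        with cx_Oracle.connect("source_schema", "password", "SOURCE1") as src:
            src.cursor().execute(EXPORT_PLSQL)

        # ...then pull the file across the database link and import it on the target.
        with cx_Oracle.connect("target_schema", "password", "TARGET") as tgt:
            tgt.cursor().execute(TRANSFER_IMPORT_PLSQL)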

    Read the article

  • Deep in the Heart of Texas

    - by Applications User Experience
    Author: Erika Webb, Manager, Fusion Applications UX User Assistance

    When I was first working in the usability field, the only way I could consider conducting a usability study was to bring a potential user to a lab environment where I could show them whatever I was interested in learning more about and ask them questions. While I hate to reveal just how long I have been working in this field, let's just say that pads of paper and a stopwatch were key tools for any test I conducted. Over the years, I have worked in simple labs with basic video taping equipment and not much else, and I have worked in corporate environments with sophisticated usability labs and state-of-the-art equipment. Years ago, we conducted all usability studies at the location of the user. If we wanted to see if there were any differences between users in New York, Chicago, and Los Angeles, we went to those places to run the test. A lab environment is very useful for many test situations. However, there has always been a debate in the usability field about whether bringing someone into a lab environment, however friendly we make it, somehow intrinsically changes the behavior of the user as compared to having them work in their own environment, at their own desk, and on their own computer. We developed systems to create a portable usability lab, so that we could go to the users that we needed to test. Do lab environments change user behavior patterns? Then 9/11 hit. You may not remember, but no planes flew for weeks afterwards. Companies all over the world couldn't fly in employees for meetings. Suddenly, traveling to the location of the users had an additional difficulty. The company I was working for at the time had usability specialists stuck in New York for days before they could finally rent a car and drive home to Colorado. This changed the world pretty suddenly, and technology jumped on the change. Companies offering Internet meeting tools were struggling until no one could travel. The Internet boomed with collaboration tools that enabled people to work together wherever they happened to be. This change in technology has made a huge difference in my world. We use collaborative tools to bring our product concepts and ideas to the user across the Internet. As a global company, we benefit from having users from all over the world inform our designs. We now run usability studies with users all over the world in a single day, a feat we couldn't have accomplished 10 years ago by plane! Other technology companies have started to do more of this type of usability testing, since the tools have improved so dramatically. Plus, in our busy world, it's not always easy to find users who can take the time away from their jobs to come to our labs. Reaching users where it is convenient for them greatly improves the odds that people do participate. I manage a team of usability specialists who live in India and California, while I live in Colorado. We have wonderful labs that we bring users into to show them our products. But very often, we run our studies remotely. We used to take the lab to the users; now we use the labs, but we let the users stay where they are. We gain users who might not have been able to leave work to come to our labs, and they get to use the system they are familiar with. And we gain users nearly anywhere that we can set up an Internet connection, as long as the users have a phone, a broadband connection, and a compatible Web browser (with no pop-up blockers).
    After we recruit participants in a traditional manner, we send them an invitation to participate through the use of a telephone conference call and a Web conferencing tool. At Oracle, we use Oracle Web Conference, part of Oracle Collaboration Suite, which enables us to give the user control of the mouse while we present a prototype or wireframe pictures. We can record the sessions over the Web and phone conference. We send the users instructions, plus tips to ensure that we won't have problems sharing screens. In some cases, when time is tight, we even run a five-minute "test session" with users a day in advance to be sure that we can connect. Prior to the test, we send users a participant script that contains information about the study, including any questionnaires. This is exactly the same script we give to participants who come to the labs. We ask users to print this before the beginning of the session. We generally run these studies by having a usability engineer in our usability labs, so that we can record the session as though the user were in the lab with us. Roughly 80% of our application software usability testing at Oracle is performed using remote methods. The probability of getting a remote test participant decreases the higher up the person is in the target organization. We have a methodology checklist available to help our usability engineers work through the remote processes.

    Read the article

  • How can I best manage making open source code releases from my company's confidential research code?

    - by DeveloperDon
    My company (let's call them Acme Technology) has a library of approximately one thousand source files that originally came from its Acme Labs research group, incubated in a development group for a couple years, and has more recently been provided to a handful of customers under non-disclosure. Acme is getting ready to release perhaps 75% of the code to the open source community. The other 25% would be released later, but for now, is either not ready for customer use or contains code related to future innovations they need to keep out of the hands of competitors. The code is presently formatted with #ifdefs that permit the same code base to work with the pre-production platforms that will be available to university researchers and a much wider range of commercial customers once it goes to open source, while at the same time being available for experimentation and prototyping and forward compatibility testing with the future platform. Keeping a single code base is considered essential for the economics (and sanity) of my group, who would have a tough time maintaining two copies in parallel. Files in our current base look something like this:

        // Copyright 2012 (C) Acme Technology, All Rights Reserved.
        // Very large, often varied and restrictive copyright license in English and French,
        // sometimes also embedded in make files and shell scripts with varied
        // comment styles.

        ... Usual header stuff...

        void initTechnologyLibrary() {
            nuiInterface(on);
        #ifdef UNDER_RESEARCH
            holographicVisualization(on);
        #endif
        }

    And we would like to convert them to something like:

        // GPL Copyright (C) Acme Technology Labs 2012, Some rights reserved.
        // Acme appreciates your interest in its technology, please contact [email protected]
        // for technical support, and www.acme.com/emergingTech for updates and RSS feed.

        ... Usual header stuff...

        void initTechnologyLibrary() {
            nuiInterface(on);
        }

    Is there a tool, parse library, or popular script that can replace the copyright and strip out not just #ifdefs, but variations like #if defined(UNDER_RESEARCH), etc.? The code is presently in Git and would likely be hosted somewhere that uses Git. Would there be a way to safely link repositories together so we can efficiently reintegrate our improvements with the open source versions? Advice about other pitfalls is welcome.
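    For the conditional-block stripping itself, a dedicated preprocessor filter such as unifdef (e.g. unifdef -UUNDER_RESEARCH file.c) or the more general coan is the usual tool, since it removes only the blocks guarded by a known symbol and leaves every other preprocessor line alone. The header swap can be a short script. Below is a minimal, hypothetical Python sketch of both steps; it assumes the guarded blocks are not nested and contain no #else branches, and the symbol and header text are taken from the question, not from any real Acme tool:

        # Hypothetical sketch: strip "#ifdef UNDER_RESEARCH ... #endif" blocks
        # (including "#if defined(UNDER_RESEARCH)") and replace the leading
        # comment header. Assumes no nesting and no #else inside the guards.
        import re
        import sys

        NEW_HEADER = (
            "// GPL Copyright (C) Acme Technology Labs 2012, Some rights reserved.\n"
            "// Acme appreciates your interest in its technology, please contact [email protected]\n"
            "// for technical support, and www.acme.com/emergingTech for updates and RSS feed.\n"
        )

        GUARD = re.compile(
            r'^[ \t]*#[ \t]*(?:ifdef[ \t]+UNDER_RESEARCH'
            r'|if[ \t]+defined[ \t]*\([ \t]*UNDER_RESEARCH[ \t]*\)).*?'
            r'^[ \t]*#[ \t]*endif[^\n]*\n',
            re.MULTILINE | re.DOTALL,
        )

        def scrub(text):
            text = GUARD.sub('', text)          # drop research-only blocks
            lines = text.splitlines(True)
            i = 0
            while i < len(lines) and lines[i].lstrip().startswith('//'):
                i += 1                          # skip the old header comment run
            return NEW_HEADER + ''.join(lines[i:])

        if __name__ == '__main__':
            for path in sys.argv[1:]:           # usage: python scrub.py file.c ...
                with open(path) as f:
                    cleaned = scrub(f.read())
                with open(path, 'w') as f:
                    f.write(cleaned)

    A run over the tree followed by a manual diff review before the first public commit would catch the odd comment styles the question mentions. On the repository question, one common arrangement is to keep the public tree as a separate Git remote and cherry-pick reviewed commits between the private and public histories, though it deserves the same caution the question implies.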

    Read the article

  • Azure Mobile Services: lessons learned

    - by svdoever
    When I first started using Azure Mobile Services I thought of it as a nice way to:

    - authenticate my users - login using Twitter, Google, Facebook, Windows Live
    - create tables, and use the client code to create the columns in the table, because that is not possible in the Azure Mobile Services UI
    - run some Javascript code on the table crud actions (Insert, Update, Delete, Read)
    - schedule a Javascript to run every 15 or more minutes

    I had no idea of the magic that was happening inside… where is the data stored? Is it a kind of big table, are relationships between tables possible? Those Javascripts on the table crud actions, is that interpreted, what is that exactly? After working for some time with Azure Mobile Services I became a lot wiser:

    - Those tables are just normal tables in an Azure SQL Server 2012.
    - Creating the table columns through client code sucks, at least from my Javascript code, because the columns are deduced from the sent JSON data, and a datetime field is sent as a string in JSON, so a string type column is created instead of a datetime column.
    - You can connect with SQL Management Studio to the Azure SQL Server, and although you can't manage your columns through the SQL Management Studio UI, it is possible to just run SQL scripts to drop and create tables and indices.
    - When you create a table through SQL script, add the table with the same name in the Azure Mobile Services UI to hook it up and be able to access the table through the provided abstraction layer.
    - You can also go to the SQL Database through the Azure Mobile Services UI, and from there get into a web based SQL management studio where you can create columns and manage your data.
    - The table crud scripts and the scheduler scripts are full blown node.js scripts, introducing a lot of power with great performance.
    - The web based script editor is really powerful; I do most of my editing currently in the editor, which has syntax highlighting and code completion. While editing the code, JsHint is used for script validation.
    - The documentation on Azure Mobile Services is… suboptimal. It is such a pity that there is no way to comment on it so the community could fill in the missing holes, like which node modules are already loaded, and which modules are available on Azure Mobile Services.

    Soon I was hacking away on Azure Mobile Services, creating my own database tables through script, and abusing the read script of an empty table named query to implement my own set of "services". The latest updates to Azure Mobile Services described in the following posts added some great new features, like creating web APIs, using shared code from your scripts, command line tools for managing Azure Mobile Services (upload and download scripts, for example), support for node modules, and git support:

    http://weblogs.asp.net/scottgu/archive/2013/06/14/windows-azure-major-updates-for-mobile-backend-development.aspx
    http://blogs.msdn.com/b/carlosfigueira/archive/2013/06/14/custom-apis-in-azure-mobile-services.aspx
    http://blogs.msdn.com/b/carlosfigueira/archive/2013/06/19/custom-api-in-azure-mobile-services-client-sdks.aspx

    In the meantime I rewrote all my "service-like" table scripts to API scripts, which works like a breeze. A bad thing with the current state of Azure Mobile Services is that the git support is not working if you are a co-administrator of your Azure subscription, and not an administrator (as in my case).
    Another bad thing is that Cross-Origin Resource Sharing (CORS) is not supported for the API yet, so no go yet from the browser client for APIs, which is my case. See http://social.msdn.microsoft.com/Forums/windowsazure/en-US/2b79c5ea-d187-4c2b-823a-3f3e0559829d/known-limitations-for-source-control-and-custom-api-features for more on these and other limitations. In his talk at Build 2013, Josh Twist showed that there is a work-around for accessing shared script code from the table scripts as well (another limitation mentioned in the post above). I could not find that code in the Votabl2 code example from the presentation at https://github.com/joshtwist/votabl2, but we can grab it from the presentation when it comes online on Channel9. By the way: you can always express your needs and ideas at http://mobileservices.uservoice.com, that's the place they are listening to (I hope!).

    Read the article

  • Rspec2, Rails3, Authlogic: Can't run specs

    - by Sam
    When I do rspec spec in my rails project, I get:

        No examples were matched. Perhaps {:if=>#<Proc:0x0000010126e998@/Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/rspec-core-2.3.1/lib/rspec/core/configuration.rb:50 (lambda)>, :unless=>#<Proc:0x0000010126e970@/Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/rspec-core-2.3.1/lib/rspec/core/configuration.rb:51 (lambda)>} is excluding everything?

        Finished in 0.00004 seconds
        0 examples, 0 failures

    Now, this seems like maybe if I wrote a spec it would work. But as soon as I write a spec (and I do include spec_helper):

        /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/rspec-core-2.3.1/lib/rspec/core/backward_compatibility.rb:20:in `const_missing': uninitialized constant Authlogic (NameError)
            from /{myapp}/app/models/user_session.rb:1:in `<top (required)>'
            from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/railties-3.0.3/lib/rails/engine.rb:138:in `block (2 levels) in eager_load!'
            from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/railties-3.0.3/lib/rails/engine.rb:137:in `each'
            from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/railties-3.0.3/lib/rails/engine.rb:137:in `block in eager_load!'
            from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/railties-3.0.3/lib/rails/engine.rb:135:in `each'
            from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/railties-3.0.3/lib/rails/engine.rb:135:in `eager_load!'
            from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/railties-3.0.3/lib/rails/application.rb:108:in `eager_load!'
            from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/railties-3.0.3/lib/rails/application/finisher.rb:41:in `block in <module:Finisher>'
            from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/railties-3.0.3/lib/rails/initializable.rb:25:in `instance_exec'
            from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/railties-3.0.3/lib/rails/initializable.rb:25:in `run'
            from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/railties-3.0.3/lib/rails/initializable.rb:50:in `block in run_initializers'
            from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/railties-3.0.3/lib/rails/initializable.rb:49:in `each'
            from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/railties-3.0.3/lib/rails/initializable.rb:49:in `run_initializers'
            from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/railties-3.0.3/lib/rails/application.rb:134:in `initialize!'
            from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/railties-3.0.3/lib/rails/application.rb:77:in `method_missing'
            from /{myapp}/config/environment.rb:5:in `<top (required)>'
            from <internal:lib/rubygems/custom_require>:29:in `require'
            from <internal:lib/rubygems/custom_require>:29:in `require'
            from /{myapp}/spec/spec_helper.rb:3:in `<top (required)>'
            from <internal:lib/rubygems/custom_require>:29:in `require'
            from <internal:lib/rubygems/custom_require>:29:in `require'
            from /{myapp}/spec/controllers/pages_controller_spec.rb:1:in `<top (required)>'
            from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/rspec-core-2.3.1/lib/rspec/core/configuration.rb:388:in `load'
            from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/rspec-core-2.3.1/lib/rspec/core/configuration.rb:388:in `block in load_spec_files'
            from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/rspec-core-2.3.1/lib/rspec/core/configuration.rb:388:in `map'
            from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/rspec-core-2.3.1/lib/rspec/core/configuration.rb:388:in `load_spec_files'
            from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/rspec-core-2.3.1/lib/rspec/core/command_line.rb:18:in `run'
            from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/rspec-core-2.3.1/lib/rspec/core/runner.rb:55:in `run_in_process'
            from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/rspec-core-2.3.1/lib/rspec/core/runner.rb:46:in `run'
            from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/rspec-core-2.3.1/lib/rspec/core/runner.rb:10:in `block in autorun'

    The important line here seems to be:

        /core/backward_compatibility.rb:20:in `const_missing': uninitialized constant Authlogic (NameError)

    Now if this were rails 2.3.8, I'd simply put config.gem "authlogic" into the environment.rb, in the initialization code block. However, the rails 3 environment.rb looks way different (there is no config code block, so putting it in arbitrarily causes an error where config is not defined). So my questions are:

    1) Do I actually have to put the gem config anywhere? I looked at https://github.com/trevmex/authlogic_rails3_example/ and it seems he didn't put it anywhere.

    2) Does anyone know what I'm doing wrong in terms of rspec?
My gem list is *** LOCAL GEMS *** abstract (1.0.0) actionmailer (3.0.3, 3.0.1, 3.0.0, 3.0.0.rc2, 2.3.4) actionpack (3.0.3, 3.0.1, 3.0.0, 3.0.0.rc2, 2.3.4) activemodel (3.0.3, 3.0.1, 3.0.0, 3.0.0.rc2) activerecord (3.0.3, 3.0.1, 3.0.0, 3.0.0.rc2, 2.3.4) activeresource (3.0.3, 3.0.1, 3.0.0, 3.0.0.rc2, 2.3.4) activesupport (3.0.3, 3.0.1, 3.0.0, 3.0.0.rc2, 2.3.4) arel (2.0.6, 1.0.1) asdf (0.5.0) authlogic (2.1.6, 2.1.3) autotest (4.4.6, 4.4.1) autotest-fsevent (0.2.4) autotest-growl (0.2.9) autotest-rails (4.1.0) autotest-rails-pure (4.1.2) bluecloth (2.0.9) builder (2.1.2) bundler (1.0.7, 1.0.2) cgi_multipart_eof_fix (2.5.0) commonwatir (1.6.2) couchrest (0.33) cri (1.0.1) cucumber (0.4.4, 0.4.3, 0.3.11) daemons (1.1.0, 1.0.10) dependencies (0.0.7) diff-lcs (1.1.2) erubis (2.6.6) fastercsv (1.5.0) fastthread (1.0.7) firewatir (1.6.2) flay (1.4.0) flog (2.2.0) funfx (0.2.2) gem_plugin (0.2.3) gemsonrails (0.7.2) giraffesoft-resource_controller (0.6.5) haml (2.2.14) hoe (2.3.3) i18n (0.4.1) jscruggs-metric_fu (1.1.5) json_pure (1.1.9) kramdown (0.12.0) mail (2.2.13, 2.2.6.1) memcache-client (1.8.5) mime-types (1.16) mojombo-chronic (0.3.0) mongrel (1.1.5) monk (0.0.7) nanoc (3.1.5) nanoc3 (3.1.5) nokogiri (1.4.3.1, 1.4.0) open4 (0.9.6) polyglot (0.3.1, 0.2.9) rack (1.2.1, 1.0.1) rack-mount (0.6.13) rack-test (0.5.6) rails (3.0.0, 2.3.4) rails3-generators (0.17.0, 0.14.0) railties (3.0.3, 3.0.1, 3.0.0, 3.0.0.rc2) rake (0.8.7) relevance-rcov (0.9.2.1) rest-client (1.0.3) rspec (2.3.0, 2.0.0.rc, 1.2.9) rspec-core (2.3.1, 2.0.0.rc) rspec-expectations (2.3.0, 2.0.0.rc) rspec-mocks (2.3.0, 2.0.0.rc) rspec-rails (2.3.1, 2.0.0.rc, 1.2.9) ruby_parser (2.0.4) rubyforge (2.0.3) rubygems-update (1.3.6, 1.3.5) rvm (1.0.13) s4t-utils (1.0.4) safariwatir (0.3.7) sexp_processor (3.0.3) spork (0.7.3) sqlite3-ruby (1.3.1, 1.2.5) sys-uname (0.8.5) term-ansicolor (1.0.4) text-format (1.0.0) text-hyphen (1.0.0) thor (0.14.6, 0.14.3, 0.12.0) treetop (1.4.8, 1.4.2) tzinfo (0.3.23) user-choices (1.1.6) vlad (2.0.0) vlad-git (2.1.0) webrat (0.7.1, 0.6.0, 0.5.3) xml-simple (1.0.12) ZenTest (4.4.2) I am using ruby 1.9.2 and rails 3.0.3 installed using RVM on OSX 10.6 Snow Leopard. I just want to be able to run my specs like I used to. As a separate issue, autotest yields an error about an include for autotest/growl but I installed autotest-growl. Maybe this is a gem issue? I tried doing the same things and get the same error when it comes to using my ubuntu 10.04 server machine though. Gemfile source 'http://rubygems.org' gem 'rails', '3.0.3' # Bundle edge Rails instead: # gem 'rails', :git => 'git://github.com/rails/rails.git' gem 'sqlite3-ruby', :require => 'sqlite3' group :couch do gem 'couchrest' end group :user_auth do gem 'authlogic' gem "rails3-generators" gem 'facebooker' end group :markup do gem 'haml' gem 'sass' end group :testing do gem 'rspec-rails' gem 'rspec' gem 'webrat' gem 'cucumber' gem 'capybara' gem 'factory_girl' gem 'shoulda' gem 'autotest' end group :server do gem 'unicorn' end # Use unicorn as the web server # gem 'unicorn' # Deploy with Capistrano # gem 'capistrano' # To use debugger # gem 'ruby-debug' # Bundle the extra gems: # gem 'bj' # gem 'nokogiri' # gem 'sqlite3-ruby', :require => 'sqlite3' # gem 'aws-s3', :require => 'aws/s3' # Bundle gems for the local environment. 
Make sure to # put test-only gems in this group so their generators # and rake tasks are available in development mode: # group :development, :test do # gem 'webrat' # end Gemfile.lock GEM remote: http://rubygems.org/ specs: ZenTest (4.4.2) abstract (1.0.0) actionmailer (3.0.3) actionpack (= 3.0.3) mail (~> 2.2.9) actionpack (3.0.3) activemodel (= 3.0.3) activesupport (= 3.0.3) builder (~> 2.1.2) erubis (~> 2.6.6) i18n (~> 0.4) rack (~> 1.2.1) rack-mount (~> 0.6.13) rack-test (~> 0.5.6) tzinfo (~> 0.3.23) activemodel (3.0.3) activesupport (= 3.0.3) builder (~> 2.1.2) i18n (~> 0.4) activerecord (3.0.3) activemodel (= 3.0.3) activesupport (= 3.0.3) arel (~> 2.0.2) tzinfo (~> 0.3.23) activeresource (3.0.3) activemodel (= 3.0.3) activesupport (= 3.0.3) activesupport (3.0.3) arel (2.0.6) authlogic (2.1.6) activesupport autotest (4.4.6) ZenTest (>= 4.4.1) builder (2.1.2) capybara (0.4.0) celerity (>= 0.7.9) culerity (>= 0.2.4) mime-types (>= 1.16) nokogiri (>= 1.3.3) rack (>= 1.0.0) rack-test (>= 0.5.4) selenium-webdriver (>= 0.0.27) xpath (~> 0.1.2) celerity (0.8.6) childprocess (0.1.6) ffi (~> 0.6.3) couchrest (1.0.1) json (>= 1.4.6) mime-types (>= 1.15) rest-client (>= 1.5.1) cucumber (0.10.0) builder (>= 2.1.2) diff-lcs (~> 1.1.2) gherkin (~> 2.3.2) json (~> 1.4.6) term-ansicolor (~> 1.0.5) culerity (0.2.13) diff-lcs (1.1.2) erubis (2.6.6) abstract (>= 1.0.0) facebooker (1.0.75) json_pure (>= 1.0.0) factory_girl (1.3.2) ffi (0.6.3) rake (>= 0.8.7) gherkin (2.3.2) json (~> 1.4.6) term-ansicolor (~> 1.0.5) haml (3.0.25) i18n (0.5.0) json (1.4.6) json_pure (1.4.6) kgio (2.0.0) mail (2.2.13) activesupport (>= 2.3.6) i18n (>= 0.4.0) mime-types (~> 1.16) treetop (~> 1.4.8) mime-types (1.16) nokogiri (1.4.4) polyglot (0.3.1) rack (1.2.1) rack-mount (0.6.13) rack (>= 1.0.0) rack-test (0.5.6) rack (>= 1.0) rails (3.0.3) actionmailer (= 3.0.3) actionpack (= 3.0.3) activerecord (= 3.0.3) activeresource (= 3.0.3) activesupport (= 3.0.3) bundler (~> 1.0) railties (= 3.0.3) rails3-generators (0.17.0) railties (>= 3.0.0) railties (3.0.3) actionpack (= 3.0.3) activesupport (= 3.0.3) rake (>= 0.8.7) thor (~> 0.14.4) rake (0.8.7) rest-client (1.6.1) mime-types (>= 1.16) rspec (2.3.0) rspec-core (~> 2.3.0) rspec-expectations (~> 2.3.0) rspec-mocks (~> 2.3.0) rspec-core (2.3.1) rspec-expectations (2.3.0) diff-lcs (~> 1.1.2) rspec-mocks (2.3.0) rspec-rails (2.3.1) actionpack (~> 3.0) activesupport (~> 3.0) railties (~> 3.0) rspec (~> 2.3.0) rubyzip (0.9.4) sass (3.1.0.alpha.206) selenium-webdriver (0.1.2) childprocess (~> 0.1.5) ffi (~> 0.6.3) json_pure rubyzip shoulda (2.11.3) sqlite3-ruby (1.3.2) term-ansicolor (1.0.5) thor (0.14.6) treetop (1.4.9) polyglot (>= 0.3.1) tzinfo (0.3.23) unicorn (3.1.0) kgio (~> 2.0.0) rack webrat (0.7.2) nokogiri (>= 1.2.0) rack (>= 1.0) rack-test (>= 0.5.3) xpath (0.1.2) nokogiri (~> 1.3) PLATFORMS ruby DEPENDENCIES authlogic autotest capybara couchrest cucumber facebooker factory_girl haml rails (= 3.0.3) rails3-generators rspec rspec-rails sass shoulda sqlite3-ruby unicorn webrat
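A minimal sketch of one likely fix, assuming the stock Rails 3 config/application.rb: the generated file only calls Bundler.require(:default, Rails.env), so gems placed in custom Gemfile groups such as :user_auth above are never required, and the Authlogic constant never gets defined. Listing the custom groups explicitly is one way around that (the group names come from the Gemfile above; this is a guess at the cause, not a confirmed diagnosis):

# config/application.rb (sketch)
require File.expand_path('../boot', __FILE__)
require 'rails/all'

# Stock Rails 3 line -- requires only :default and the current environment group:
# Bundler.require(:default, Rails.env) if defined?(Bundler)

# Requiring the custom groups as well makes gems like authlogic load:
Bundler.require(:default, :couch, :user_auth, :markup, :server, Rails.env) if defined?(Bundler)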

    Read the article

  • Heroku and Github integration (how to structure the project)

    - by Noah
    I'm creating a webservice and I want to store the source on github and run the app on heroku. I haven't seen my exact scenario addressed anywhere on the 'net so far, so I'll ask it here. I want to have the following directory structure:
    /project
      .git
      README          <-- project readme file
      TODO.otl        <-- project outline
      ...             <-- other project-related stuff
      /my_rails_app
        app
        config
        ...
        README        <-- rails' readme file
    In the above, project corresponds to http://github.com/myuser/project, and my_rails_app is the code that should be pushed to heroku. Do I need a separate branch for the rails app, or is there a simpler way that I'm missing? I guess my project-related non-rails files could live in my_rails_app, but the rails README already lives there and it seems inconsistent to overwrite that. However, if I leave it, my github page for the rails app will contain the rails readme, which makes no sense. Thanks, Noah
    P.S. I tried just setting it up as described above and running git push heroku from the main project folder. Of course, heroku doesn't know I want to deploy the subfolder:
    -----> Heroku receiving push
     !     Heroku push rejected, no Rails or Rack app detected.
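    One hedged sketch of an approach, assuming a git version that ships the subtree command (the heroku remote and master branch are the Heroku defaults; the prefix matches the layout above). This keeps a single repository on github and pushes only the subfolder's history to heroku:
    $ cd project
    $ heroku create
    $ git subtree push --prefix my_rails_app heroku master
    If subtree isn't available, a simpler fallback is a second repository initialized inside my_rails_app that pushes to heroku, while the outer repository pushes to github.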

    Read the article

  • "Pretty" Continuous Integration for Python

    - by dbr
    This is a slightly vain question, but BuildBot's output isn't particularly nice to look at. Compared to phpUnderControl, Hudson, CruiseControl.rb and others, BuildBot looks rather archaic. I'm currently playing with Hudson, but it is very Java-centric (although with this guide, I found it easier to set up than BuildBot, and it produced more info). Basically: are there any continuous integration systems aimed at Python that produce lots of shiny graphs and the like?
    Update: After trying a few alternatives, I think I'll stick with Hudson. Integrity was nice and simple, but quite limited. I think Buildbot is better suited to having numerous build slaves, rather than everything running on a single machine like I was using it. Setting Hudson up for a Python project was pretty simple:
    Download Hudson from https://hudson.dev.java.net/
    Run it with java -jar hudson.war
    Open the web interface on the default address of http://localhost:8080
    Go to Manage Hudson, Plugins, click "Update" or similar
    Install the Git plugin (I had to set the git path in the Hudson global preferences)
    Create a new project, enter the repository, SCM polling intervals and so on
    Install nosetests via easy_install if it's not already installed
    In a build step, add nosetests --with-xunit --verbose
    Check "Publish JUnit test result report" and set "Test report XMLs" to **/nosetests.xml
    That's all that's required. You can set up email notifications, and the plugins are worth a look. A few I'm currently using for Python projects:
    SLOCCount plugin to count lines of code (and graph it!) - you need to install sloccount separately
    Violations to parse the PyLint output (you can set up warning thresholds and graph the number of violations over each build)
    Cobertura can parse the coverage.py output. Nosetests can gather coverage while running your tests, using nosetests --with-coverage (this writes the output to **/coverage.xml)
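    For reference, a sketch of what the "Execute shell" build step can look like once the plugins above are installed ("myproject" is a placeholder package name, and the || true keeps PyLint's nonzero exit code from failing the build; adjust to taste):
    nosetests --with-xunit --with-coverage --verbose
    pylint -f parseable myproject > pylint.out || true
    sloccount --duplicates --wide --details . > sloccount.sc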

    Read the article

  • Good DB Migrations for CakePHP?

    - by Martin Westin
    Hi, I have tried a few migration scripts for CakePHP, but I ran into problems with all of them in some form or another. Please advise me on a migration option for Cake that you use live and know works. I'd like the following "features":
    - Support CakePHP 1.2 (e.g. CakeDC's migrations will only be an option once 1.3 is stable and my app is migrated to the new codebase)
    - Support for (or at least not halting on) models with a different database config
    - Support models in sub-folders of app/models
    - Support models in plugins
    - Support tables that do not conform to Cake conventions (I have a few special tables that do not have a single primary key field and need to keep them)
    - Plays well with automated deployment via Capistrano and Git. I do not need Rails-style versioned files; a git-versioned schema file that is compared live to the existing schema will do. That is: I like the SchemaShell in Cake, apart from it not being compatible with most of my requirements above.
    I have looked at and tested:
    CakePHP Schema Shell http://book.cakephp.org/view/734/Schema-management-and-migrations
    CakeDC migrations http://cakedc.com/downloads/view/cakephp_migrations_plugin
    YAML migrations http://github.com/georgious/cakephp-yaml-migrations-and-fixtures
    joelmoss migrations http://code.google.com/p/cakephp-migrations
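    For anyone weighing the options, a minimal sketch of the SchemaShell workflow described above, assuming CakePHP 1.2's cake console is on the path (commands as I recall them for 1.2; verify against your version):
    $ cake schema generate        # snapshot the current DB into app/config/schema/schema.php
    $ git add app/config/schema/schema.php
    $ git commit -m "schema snapshot"
    # on the target machine, after pulling:
    $ cake schema run update      # compare schema.php to the live DB and apply the differences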

    Read the article

  • Database structure and source control - best practice

    - by Paddy
    Background
    I came from several years working in a company where all the database objects were stored in source control, one file per object. We had a list of all the objects that was maintained when new items were added (to allow us to have scripts run in order and handle dependencies) and a VB script that ran to create one big script for running against the database. All the tables were 'create if not exists' and all the SPs etc. were drop and recreate.
    Fast-forward to the present: I am now working in a place where the database is the master and there is no source control for DB objects, but we do use Redgate's tools for updating our production database (SQL Compare), which is very handy and requires little work.
    Question
    How do you handle your DB objects? I like to have them under source control (and, as we're using Git, I'd like to be able to handle merge conflicts in the scripts, rather than in the DB), but I'm going to be hard-pressed to get past the ease of using SQL Compare to update the database. I don't really want us updating scripts in Git and then using SQL Compare to update the production database from our dev DB, as I'd rather have 'one version of the truth', but I don't really want to get into rewriting a custom bit of software to bundle the whole lot of scripts together. I think Visual Studio Database Edition may do something similar to this, but I'm not sure whether we will have the budget for it. I'm sure this has been asked to death, but I can't find anything that seems to have quite the answer I'm looking for. Similar to this, but not quite the same: http://stackoverflow.com/questions/340614/what-are-the-best-practices-for-database-scripts-under-code-control
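    A minimal sketch of the "one file per object" convention described above, in SQL Server syntax since SQL Compare is in play (table and procedure names are placeholders): tables are guarded with an existence check, while procedures are dropped and recreated.
    -- Customer.sql: create if not exists
    IF NOT EXISTS (SELECT * FROM sys.tables WHERE name = 'Customer')
    BEGIN
        CREATE TABLE dbo.Customer (
            CustomerId INT IDENTITY(1,1) PRIMARY KEY,
            Name       NVARCHAR(100) NOT NULL
        );
    END
    GO
    -- usp_GetCustomer.sql: drop and recreate
    IF OBJECT_ID('dbo.usp_GetCustomer', 'P') IS NOT NULL
        DROP PROCEDURE dbo.usp_GetCustomer;
    GO
    CREATE PROCEDURE dbo.usp_GetCustomer @CustomerId INT
    AS
        SELECT CustomerId, Name FROM dbo.Customer WHERE CustomerId = @CustomerId;
    GO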

    Read the article

  • Help setting up command line gist

    - by smotchkkiss
    setup
    I'm following defunkt's gist setup guide.
    [smotchkkiss ~]$ sudo gem install gist
    [smotchkkiss ~]$ git config --global github.user "my github name"
    [smotchkkiss ~]$ git config --global github.token "my github token"
    [smotchkkiss ~]$ echo "puts 'hello, gist.'" > hello.rb
    [smotchkkiss ~]$ gist hello.rb
    output
    Usage: open [-e] [-t] [-f] [-W] [-n] [-g] [-h] [-b <bundle identifier>] [-a <application>] [filenames]
    Help: Open opens files from a shell. By default, opens each file using the default application for that file. If the file is in the form of a URL, the file will be opened as a URL.
    Options:
    -a                 Opens with the specified application.
    -b                 Opens with the specified application bundle identifier.
    -e                 Opens with TextEdit.
    -t                 Opens with default text editor.
    -f                 Reads input from standard input and opens with TextEdit.
    -W, --wait-apps    Blocks until the used applications are closed (even if they were already running).
    -n, --new          Open a new instance of the application even if one is already running.
    -g, --background   Does not bring the application to the foreground.
    -h, --header       Searches header file locations for headers matching the given filenames, and opens them.
    return value
    nil
    help!
    nil return value? What gives? No new gist appears in my My Gists page on github.
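    The usage text suggests the gist gem shelled out to OS X's open with no URL, which would mean the post to github failed quietly; that points back at the credentials. A cheap first check (a sketch of a guess, not a confirmed diagnosis) is to confirm what actually got stored, and that github.token holds the API token from your GitHub account page rather than a placeholder string:
    [smotchkkiss ~]$ git config --get github.user
    [smotchkkiss ~]$ git config --get github.token
    [smotchkkiss ~]$ gist hello.rb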

    Read the article

  • Industry-style practices for increasing productivity in a small scientific environment

    - by drachenfels
    Hi, I work in a small, independent scientific lab at a university in the United States, and it has come to my attention that, compared with a lot of practices that are ostensibly followed in industry - like daily check-ins to a version control system, use of a single IDE/editor for all languages (like emacs), etc. - we follow rather shoddy programming practices. So I was thinking of getting together all my programs, scripts, etc., and building a streamlined environment to increase productivity. I'd like suggestions from people on Stack Overflow for the same. Here is my primary plan (questions/things for which I'd like suggestions are in italics):
    1] Install Cygwin, and get it to work well with Windows so I can use git or a similar version control system (is there a DVCS which can work directly from the Windows CLI, so I can skip the Cygwin step? see the sketch after this list for one possibility).
    2] Set up emacs to work with C, Python, and MATLAB files, so I can edit and compile all three at once from a single editor (say, emacs). (I'm not very familiar with the emacs menus, but is there a way to set the path to the compiler for certain languages? I know I can Google this, but emacs documentation has proved very hard for me to read so far, so I'd appreciate it if someone explained it in simple language.)
    3] Start checking in code at the end of each day or half-day so as to maintain a proper record of the progress of my code (two questions: can you check files in directly from emacs? is there a way to check LabVIEW files into a DVCS like git?).
    Lastly, I'd like to apologize for the rather vague nature of the question, and hope I shall learn to ask better questions over time. I'd appreciate it if people gave their suggestions, though, and pointed to any resources which may help me learn.
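    On point 1], a sketch assuming msysGit (Git for Windows) is installed and on %PATH% - git then runs from the plain Windows command prompt, so the Cygwin step can be skipped entirely (the paths and file patterns below are placeholders):
    :: end-of-day check-in from cmd.exe
    C:\work\myproject> git init
    C:\work\myproject> git add matlab\*.m src\*.c scripts\*.py
    C:\work\myproject> git commit -m "End-of-day check-in"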

    Read the article

  • How to architect Rails site that can be edited while running?

    - by Chris Kimpton
    Hi, I am writing a Rails app that "scrapes/navigates" some other websites and web services for content. I am using Mechanize and Savon to do the heavy lifting. But given the dynamic nature of the web, I'd like to make my calls to these editable by the admin users of the site, rather than requiring me to release a new version of the site. The actual scraping thread happens asynchronously to the website, using the daemons gem. My requirements are:
    - Thinking that the scraping/webservice-calling code is quite simple, the easiest route is to make the whole class editable by the admins.
    - Keep a history of the scraping code, so that we can fairly easily revert if we introduce a problem.
    - Initially use the code from the file system, but as soon as it has been edited and stored somewhere, use that code instead.
    I am thinking my options are:
    - Store the code in the db (with a history table for the old versions)
    - Store the code in a private git repo somewhere and access that for the history/latest versions.
    I am thinking the git route might be easiest, given its raison d'etre is to track file history... But perhaps there is a gem/plugin that does all this for me, out of the box? Thanks in advance for any tips/advice. ~chris
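    A minimal sketch of the database option, assuming a hypothetical scraper_scripts table with name, body and created_at columns (all names here are illustrative): each edit inserts a new row, so the table doubles as the history, and the file system remains the fallback.
    class ScraperScript < ActiveRecord::Base
      # newest saved version of a named script, or nil if never edited
      def self.latest(name)
        where(:name => name).order("created_at DESC").first
      end
    end

    def load_scraper(name, fallback_path)
      record = ScraperScript.latest(name)
      source = record ? record.body : File.read(fallback_path)
      eval(source, TOPLEVEL_BINDING, name)  # (re)defines the scraper class
    end
    Reverting is then just deleting (or skipping) the newest row. The eval call is only to illustrate the loading idea; in practice you would want to restrict who can edit these records.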

    Read the article

  • delayed_job :run_at is not working. all jobs set to run at current time

    - by jtwg
    I have installed the collectiveidea fork of delayed_job at git://github.com/collectiveidea/delayed_job.git but cannot get it to accept :run_at. From my Gemfile:
    gem 'rails', '3.2.2'
    gem 'delayed_job_active_record'
    When I try it in the console:
    1.9.2-p318 :005 > Time.now
     => 2012-03-24 10:20:34 -0700
    1.9.2-p318 :006 > User.delay.new :run_at => 5.days.from_now
      SQL (0.1ms) BEGIN
      SQL (1.6ms) INSERT INTO `delayed_jobs` (`attempts`, `created_at`, `failed_at`, `handler`, `last_error`, `locked_at`, `locked_by`, `priority`, `run_at`, `updated_at`) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?) [["attempts", 0], ["created_at", Sat, 24 Mar 2012 17:20:36 UTC +00:00], ["failed_at", nil], ["handler", "--- !ruby/object:Delayed::PerformableMethod\nobject: !ruby/class 'User'\nmethod_name: :new\nargs:\n- :run_at: 2012-03-29 17:20:36.876374000Z\n"], ["last_error", nil], ["locked_at", nil], ["locked_by", nil], ["priority", 0], ["run_at", Sat, 24 Mar 2012 17:20:36 UTC +00:00], ["updated_at", Sat, 24 Mar 2012 17:20:36 UTC +00:00]]
      (2.7ms) COMMIT
     => #<Delayed::Backend::ActiveRecord::Job id: 17, priority: 0, attempts: 0, handler: "--- !ruby/object:Delayed::PerformableMethod\nobject:...", last_error: nil, run_at: "2012-03-24 17:20:36", locked_at: nil, failed_at: nil, locked_by: nil, created_at: "2012-03-24 17:20:36", updated_at: "2012-03-24 17:20:36">
    I see there is some UTC offset in the run time, but based on Time.now I can tell run_at is not being moved forward by 5 days:
    "run_at", Sat, 24 Mar 2012 17:20:36 UTC +00:00
    Any ideas?
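    For what it's worth, the serialized handler above shows :run_at being captured as an argument to User.new rather than applied to the job, which suggests the option is going to the wrong call. With the collectiveidea fork, options such as :run_at are passed to delay itself; a sketch of the likely fix:
    # options go to delay, not to the delayed method:
    User.delay(:run_at => 5.days.from_now).new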

    Read the article

  • Vim, LaTeX, and version control

    - by Bkkbrad
    I'm writing a LaTeX document in vim, and I have it hard-wrapping at 80 characters to make reading easier. However, this causes problems with tracking changes in version control. For example, inserting "Lorem ipsum" at the beginning of this text:
    1 Dolor sit amet, consectetur adipiscing elit. Phasellus bibendum lobortis lectus
    2 quis porta. Aenean vestibulum magna vel purus laoreet at molestie massa
    3 suscipit. Vestibulum vestibulum, mauris nec convallis ultrices, tellus sapien
    4 ullamcorper elit, dignissim consectetur justo tellus et nunc.
    results in:
    1 Lorem ipsum dolor sit amet, consectetur adipiscing elit. Phasellus bibendum
    2 lobortis lectus quis porta. Aenean vestibulum magna vel purus laoreet at
    3 molestie massa suscipit. Vestibulum vestibulum, mauris nec convallis ultrices,
    4 tellus sapien ullamcorper elit, dignissim consectetur justo tellus et nunc.
    When I review this change in git, it tells me that all the lines of the paragraph have changed because of the re-wrapping, even though only one semantic change has occurred. One way around this problem is to have every sentence on its own line. This looks the same in the rendered document, but the source is now harder to read, because the line lengths vary widely:
    1 Lorem ipsum dolor sit amet, consectetur adipiscing elit.
    2 Phasellus bibendum lobortis lectus quis porta.
    3 Aenean vestibulum magna vel purus laoreet at molestie massa suscipit.
    4 Vestibulum vestibulum, mauris nec convallis ultrices, tellus sapien ullamcorper elit, dignissim consectetur justo tellus et nunc.
    (If I soft-wrap at 80, things still look bad, just in a different way.) Is it possible to have my text on disk with one newline per sentence, but display and edit it in vim as if the text of each paragraph were one long line, soft-wrapped at 80 characters? I assume it requires some vim-foo rather than tweaking git or LaTeX.
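    A sketch of the display-side half of this, using only stock Vim options (note that soft wrap happens at the window edge, so sizing the window to roughly 80 columns approximates the 80-character target rather than enforcing it):
    " in ~/.vimrc, for .tex files kept one sentence per line
    autocmd FileType tex setlocal textwidth=0         " stop hard-wrapping on insert
    autocmd FileType tex setlocal wrap linebreak      " soft-wrap at word boundaries
    autocmd FileType tex setlocal nolist              " 'list' disables 'linebreak'
    autocmd FileType tex setlocal display+=lastline   " show as much of a long line as fits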

    Read the article

< Previous Page | 295 296 297 298 299 300 301 302 303 304 305 306  | Next Page >