Search Results

Search found 5334 results on 214 pages for 'django migration'.


  • Apache2 several sites and configuration question

    - by Hellnar
    Hello, I want to host several sites from my VPS via Apache2 under Debian. For this I removed the default site from sites-enabled and added this under /etc/apache2/sites-enabled/www.mysite.com:

        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName mysite.com
            ServerAlias www.mysite.com
            Alias /media/ /home/myuser/mysite/media/
            Alias /admin_media/ /home/myuser/django/Django-1.2/django/contrib/admin/media/
            WSGIScriptAlias / /home/myuser/mysite/wsgi.py
            ErrorLog /home/myuser/mysite/logs/error.log
            CustomLog /home/myuser/mysite/logs/access.log combined
        </VirtualHost>

    After that I removed the default symbolic link from sites-enabled/ via a2dissite default and enabled the new one with a2ensite www.mysite.com. Still, I get this error:

        Restarting web server: apache2apache2: apr_sockaddr_info_get() failed for myuser
        apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1 for ServerName
        [Sat May 29 11:41:13 2010] [warn] NameVirtualHost *:80 has no VirtualHosts
        apache2: apr_sockaddr_info_get() failed for myuser
        apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1 for ServerName
        [Sat May 29 11:41:13 2010] [warn] NameVirtualHost *:80 has no VirtualHosts

    What could be the problem?

    Read the article

  • how to save nested form attributes to database

    - by siulamvictor
    I don't really understand how nested attributes work in Rails. I have 2 models, Account and User. Account has_many Users. When a new user filled in the form, Rails reported:

        User(#2164802740) expected, got Array(#2148376200)

    Is it that Rails cannot read the nested attributes from the form? How can I fix it? How can I save the data from a nested-attributes form to the database? Thanks all~ Here are the MVCs:

    Account model:

        class Account < ActiveRecord::Base
          has_many :users
          accepts_nested_attributes_for :users

          validates_presence_of :company_name, :message => "companyname is required."
          validates_presence_of :company_website, :message => "website is required."
        end

    User model:

        class User < ActiveRecord::Base
          belongs_to :account

          validates_presence_of :user_name, :message => "username too short."
          validates_presence_of :password, :message => "password too short."
        end

    Account controller:

        class AccountController < ApplicationController
          def new
          end

          def created
          end

          def create
            @account = Account.new(params[:account])
            if @account.save
              redirect_to :action => "created"
            else
              flash[:notice] = "error!!!"
              render :action => "new"
            end
          end
        end

    Account/new view:

        <h1>Account#new</h1>
        <% form_for :account, :url => { :action => "create" } do |f| %>
          <% f.fields_for :users do |ff| %>
            <p>
              <%= ff.label :user_name %><br />
              <%= ff.text_field :user_name %>
            </p>
            <p>
              <%= ff.label :password %><br />
              <%= ff.password_field :password %>
            </p>
          <% end %>
          <p>
            <%= f.label :company_name %><br />
            <%= f.text_field :company_name %>
          </p>
          <p>
            <%= f.label :company_website %><br />
            <%= f.text_field :company_website %>
          </p>
        <% end %>

    Account migration:

        class CreateAccounts < ActiveRecord::Migration
          def self.up
            create_table :accounts do |t|
              t.string :company_name
              t.string :company_website
              t.timestamps
            end
          end

          def self.down
            drop_table :accounts
          end
        end

    User migration:

        class CreateUsers < ActiveRecord::Migration
          def self.up
            create_table :users do |t|
              t.string :user_name
              t.string :password
              t.integer :account_id
              t.timestamps
            end
          end

          def self.down
            drop_table :users
          end
        end

    Thanks everyone. :)

    Read the article

  • How to set up my belongs_to and has_many reference

    - by dagda1
    Hi, I have an ExpenseType model that I created with the following migration:

        class CreateExpenseTypes < ActiveRecord::Migration
          def self.up
            create_table :expense_types do |t|
              t.column :name, :string, :null => false
              t.timestamps
            end
          end
        end

    I can see the table name is the pluralised expense_types. My question is, how do I reference this type in a belongs_to relationship? Is it:

        belongs_to :expensetype

    or is it:

        belongs_to :expense_type

    I do not seem able to set it up correctly. Cheers

    Read the article

  • Can jQuery retrieve a file?

    - by danspants
    I have a Django application that creates xls and txt files. I'm trying to create a button which sends a jQuery.get request to Django; Django then returns a freshly created file to jQuery, which in turn pops open a Save As dialog. My code looks like this:

        jQuery("#testButton").live("click", function() {
            jQuery.jGrowl("click");
            jQuery.get("/filetest", function(data) {});
        });

        <div id="testButton" class="button">CLICK TO TEST</div>

    Getting the file to jQuery is easy, but I have no idea how to then raise a Save As dialog. Any assistance would be fantastic!
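
    Editor's note: a file fetched via jQuery.get lands in a JavaScript string, and there is no way to hand it from there to the browser's Save As dialog. The usual pattern is to navigate the browser itself (e.g. window.location = "/filetest") to a Django view that sets a Content-Disposition: attachment header. A minimal sketch; the view name and file path below are hypothetical:

        # views.py -- hypothetical view name and file path, for illustration only
        import os
        from django.http import HttpResponse

        def filetest(request):
            path = "/tmp/output.xls"  # wherever the app writes the generated file
            with open(path, "rb") as f:
                response = HttpResponse(f.read(), content_type="application/vnd.ms-excel")
            # This header is what makes the browser pop a Save As dialog
            # instead of rendering the response inline.
            response["Content-Disposition"] = (
                "attachment; filename=%s" % os.path.basename(path))
            return response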

    Read the article

  • Static Website - Converting to Dynamic, need to import information from database on different host

    - by gvernold
    This seems really complicated to ask about, so I hope someone can help: We have a long-running static website held with a hosting company that provides PHP, Ruby on Rails and Drupal/Joomla support. A little limited, I know, but we got reasonably decent search engine rankings and didn't want them to drop. We have two much more recently created sites on another host written in Python/Django. The original site is now too big to handle statically and we want to create a more dynamic site in its place without changing servers/webhosts. The data we want to feed the 'new' dynamic site comes from the same database that powers the Django sites. What is the best solution to build the new site with? Is it better to create PHP pages that connect to the database on the other host? Ruby on Rails seems like a very fast development environment not too dissimilar to Django; would we be able to fetch data from the existing databases into a Rails site and use URLs similar to our old static pages?
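
    Editorial sketch: one low-coupling option is to keep the data on the Django host and expose it over HTTP, so the new PHP or Rails front end never needs direct cross-host database credentials. A hypothetical read-only feed on the existing Django side (the app, model and field names are placeholders, not the asker's actual schema):

        # views.py on the existing Django host -- names are illustrative only
        from django.http import HttpResponse
        from django.utils import simplejson  # bundled with Django 1.x
        from catalog.models import Product   # hypothetical app and model

        def product_feed(request):
            data = [{"id": p.id, "name": p.name}
                    for p in Product.objects.all()]
            return HttpResponse(simplejson.dumps(data),
                                content_type="application/json")

    The new site, in whatever language, can then fetch and cache this JSON rather than opening a remote MySQL connection across hosts.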

    Read the article

  • uninitialized constant Encoding rake db:migrate

    - by Denis
    Hi, my RoR app uses Rails 2.1.2. When I run rake db:migrate --trace I get the following error. Any idea?

        ** Invoke db:migrate (first_time)
        ** Invoke environment (first_time)
        ** Execute environment
        ** Execute db:migrate
        rake aborted!
        uninitialized constant Encoding
        /Users/denisjacquemin/Documents/code/projects/BmfOnRails/vendor/rails/activesupport/lib/active_support/dependencies.rb:278:in `load_missing_constant'
        /Users/denisjacquemin/Documents/code/projects/BmfOnRails/vendor/rails/activesupport/lib/active_support/dependencies.rb:467:in `const_missing'
        /Users/denisjacquemin/Documents/code/projects/BmfOnRails/vendor/rails/activesupport/lib/active_support/dependencies.rb:479:in `const_missing'
        /Library/Ruby/Gems/1.8/gems/sqlite3-0.0.8/lib/sqlite3/encoding.rb:9:in `find'
        /Library/Ruby/Gems/1.8/gems/sqlite3-0.0.8/lib/sqlite3/database.rb:66:in `initialize'
        /Users/denisjacquemin/Documents/code/projects/BmfOnRails/vendor/rails/activerecord/lib/active_record/connection_adapters/sqlite3_adapter.rb:13:in `new'
        /Users/denisjacquemin/Documents/code/projects/BmfOnRails/vendor/rails/activerecord/lib/active_record/connection_adapters/sqlite3_adapter.rb:13:in `sqlite3_connection'
        /Users/denisjacquemin/Documents/code/projects/BmfOnRails/vendor/rails/activerecord/lib/active_record/connection_adapters/abstract/connection_specification.rb:292:in `send'
        /Users/denisjacquemin/Documents/code/projects/BmfOnRails/vendor/rails/activerecord/lib/active_record/connection_adapters/abstract/connection_specification.rb:292:in `connection='
        /Users/denisjacquemin/Documents/code/projects/BmfOnRails/vendor/rails/activerecord/lib/active_record/connection_adapters/abstract/connection_specification.rb:260:in `retrieve_connection'
        /Users/denisjacquemin/Documents/code/projects/BmfOnRails/vendor/rails/activerecord/lib/active_record/connection_adapters/abstract/connection_specification.rb:78:in `connection'
        /Users/denisjacquemin/Documents/code/projects/BmfOnRails/vendor/rails/activerecord/lib/active_record/migration.rb:408:in `initialize'
        /Users/denisjacquemin/Documents/code/projects/BmfOnRails/vendor/rails/activerecord/lib/active_record/migration.rb:373:in `new'
        /Users/denisjacquemin/Documents/code/projects/BmfOnRails/vendor/rails/activerecord/lib/active_record/migration.rb:373:in `up'
        /Users/denisjacquemin/Documents/code/projects/BmfOnRails/vendor/rails/activerecord/lib/active_record/migration.rb:356:in `migrate'
        /Users/denisjacquemin/Documents/code/projects/BmfOnRails/vendor/rails/railties/lib/tasks/databases.rake:99
        /Users/denisjacquemin/.gem/ruby/1.8/gems/rake-0.8.7/lib/rake.rb:636:in `call'
        /Users/denisjacquemin/.gem/ruby/1.8/gems/rake-0.8.7/lib/rake.rb:636:in `execute'
        /Users/denisjacquemin/.gem/ruby/1.8/gems/rake-0.8.7/lib/rake.rb:631:in `each'
        /Users/denisjacquemin/.gem/ruby/1.8/gems/rake-0.8.7/lib/rake.rb:631:in `execute'
        /Users/denisjacquemin/.gem/ruby/1.8/gems/rake-0.8.7/lib/rake.rb:597:in `invoke_with_call_chain'
        /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/monitor.rb:242:in `synchronize'
        /Users/denisjacquemin/.gem/ruby/1.8/gems/rake-0.8.7/lib/rake.rb:590:in `invoke_with_call_chain'
        /Users/denisjacquemin/.gem/ruby/1.8/gems/rake-0.8.7/lib/rake.rb:583:in `invoke'
        /Users/denisjacquemin/.gem/ruby/1.8/gems/rake-0.8.7/lib/rake.rb:2051:in `invoke_task'
        /Users/denisjacquemin/.gem/ruby/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `top_level'
        /Users/denisjacquemin/.gem/ruby/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `each'
        /Users/denisjacquemin/.gem/ruby/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `top_level'
        /Users/denisjacquemin/.gem/ruby/1.8/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling'
        /Users/denisjacquemin/.gem/ruby/1.8/gems/rake-0.8.7/lib/rake.rb:2023:in `top_level'
        /Users/denisjacquemin/.gem/ruby/1.8/gems/rake-0.8.7/lib/rake.rb:2001:in `run'
        /Users/denisjacquemin/.gem/ruby/1.8/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling'
        /Users/denisjacquemin/.gem/ruby/1.8/gems/rake-0.8.7/lib/rake.rb:1998:in `run'
        /Users/denisjacquemin/.gem/ruby/1.8/gems/rake-0.8.7/bin/rake:31
        /usr/bin/rake:19:in `load'
        /usr/bin/rake:19

    My database.yml:

        development:
          adapter: sqlite3
          database: db/development.sqlite3
          pool: 5
          timeout: 5000

    thanks

    Read the article

  • .NET Migrations Engine

    - by Karl Seguin
    I was once under the belief that Microsoft was working on an official, Ruby-like migration framework. However, I haven't been able to find any additional information (or even the original source of my belief). Does anyone know anything about an official migration framework, or possibly an open source one?

    Read the article

  • Is Python good for highload web projects?

    - by Vitali Fokin
    Hello! I decided to start my own web project. It will be a high-load project, and I can't decide which technologies I should use. I'm good with ASP.NET MVC, but I like languages like Python more than C#. I read a lot about Python and Django/Pylons/etc., but I didn't find any good examples of high-load projects in Python. So, the question is: Is Python good for a high-load project? Is it fast enough? And if it is, are Python frameworks like Django/Pylons/etc. good for this? Or would ASP.NET MVC be the better choice? PS: I'm not interested in Java, Ruby or PHP :) So I'm choosing only between Python + Django/Pylons/etc. and ASP.NET MVC. Thanks in advance. Please don't start holy wars :)

    Read the article

  • % confuses python raw sql query

    - by Jonathan
    Following this SO question, I'm trying to "truncate" all tables related to a certain Django application using the following raw SQL commands in Python:

        cursor.execute("set foreign_key_checks = 0")
        cursor.execute("select concat('truncate table ',table_schema,'.',table_name,';') as sql_stmt from information_schema.tables where table_schema = 'my_db' and table_type = 'base table' AND table_name LIKE 'some_prefix%'")
        for sql in [sql[0] for sql in cursor.fetchall()]:
            cursor.execute(sql)
        cursor.execute("set foreign_key_checks = 1")

    Alas, I receive the following error:

        C:\dev\my_project>my_script.py
        Traceback (most recent call last):
          File "C:\dev\my_project\my_script.py", line 295, in <module>
            cursor.execute(r"select concat('truncate table ',table_schema,'.',table_name,';') as sql_stmt from information_schema.tables where table_schema = 'my_db' and table_type = 'base table' AND table_name LIKE 'some_prefix%'")
          File "C:\Python26\lib\site-packages\django\db\backends\util.py", line 18, in execute
            sql = self.db.ops.last_executed_query(self.cursor, sql, params)
          File "C:\Python26\lib\site-packages\django\db\backends\__init__.py", line 216, in last_executed_query
            return smart_unicode(sql) % u_params
        TypeError: not enough arguments for format string

    Is the % in the LIKE making trouble? How can I work around it?
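
    Editor's note: yes, the % is the trouble. Django's debug cursor wrapper re-applies Python %-interpolation to the SQL string when logging it, so a literal % must either be doubled or moved into a bound parameter. Both variants sketched below, using the table and schema names from the question:

        # Option 1: escape the literal percent sign so %-formatting ignores it.
        cursor.execute(
            "select concat('truncate table ',table_schema,'.',table_name,';') "
            "from information_schema.tables where table_schema = 'my_db' "
            "and table_type = 'base table' and table_name LIKE 'some_prefix%%'")

        # Option 2 (usually preferable): pass the pattern as a bound parameter;
        # a % inside a parameter value needs no escaping at all.
        cursor.execute(
            "select concat('truncate table ',table_schema,'.',table_name,';') "
            "from information_schema.tables where table_schema = %s "
            "and table_type = 'base table' and table_name LIKE %s",
            ["my_db", "some_prefix%"])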

    Read the article

  • Does Doctrine 1.2 support importing indexes with Doctrine_Core::generateModelsFromDb function?

    - by Coagulant
    Does Doctrine 1.2 support importing indexes with the Doctrine_Core::generateModelsFromDb() function? I need to make a migration between 2 database connections, one of them having a unique index on 2 fields and one not, and my migration comes out empty. It looks like Doctrine doesn't support indexes when importing from the database, other than those tied to relationship foreign keys.

        $changes = Doctrine_Core::generateMigrationsFromDiff($migrationsPath, array('doctrineOld'), array('doctrine'));

    Read the article

  • Detecting source of memory usage on a Linux box

    - by apeace
    I have a toy Linux box with 256mb RAM running Ubuntu 10.04.1 LTS. Here is the output of free -m:

                     total       used       free     shared    buffers     cached
        Mem:           245        122        122          0         19         64
        -/+ buffers/cache:          38        206
        Swap:          511          0        511

    Unless I'm reading this wrong, 122mb is being used and only 84mb of that is disk cache. Here are all the processes I'm running, sorted by memory usage (ps -eo pmem,pcpu,rss,vsize,args | sort -k 1 -r):

        %MEM %CPU   RSS    VSZ COMMAND
         5.0  0.0 12648 633140 node /home/node/main/sites.js
         1.5  0.0  3884 251736 /usr/sbin/console-kit-daemon --no-daemon
         1.3  0.0  3328  77108 sshd: apeace [priv]
         0.9  0.0  2344  19624 -bash
         0.7  0.0  1776  23620 /sbin/init
         0.6  0.0  1624  77108 sshd: apeace@pts/0
         0.6  0.0  1544   9940 redis-server /etc/redis/redis.conf
         0.6  0.0  1524  25848 /usr/sbin/ntpd -p /var/run/ntpd.pid -g -u 103:105
         0.5  0.0  1324 119880 rsyslogd -c4
         0.4  0.0  1084  49308 /usr/sbin/sshd
         0.4  0.0  1028  44376 /usr/sbin/exim4 -bd -q30m
         0.3  0.0   904   6876 ps -eo pmem,pcpu,rss,vsize,args
         0.3  0.0   888  21124 cron
         0.3  0.0   868  23472 dbus-daemon --system --fork
         0.2  0.0   732  19624 -bash
         0.2  0.0   628   6128 /sbin/getty -8 38400 tty1
         0.2  0.0   628  16952 upstart-udev-bridge --daemon
         0.2  0.0   564  16800 udevd --daemon
         0.2  0.0   552  16796 udevd --daemon
         0.2  0.0   548  16796 udevd --daemon
         0.0  0.0     0      0 [xenwatch]
         0.0  0.0     0      0 [xenbus]
         0.0  0.0     0      0 [sync_supers]
         0.0  0.0     0      0 [netns]
         0.0  0.0     0      0 [migration/3]
         0.0  0.0     0      0 [migration/2]
         0.0  0.0     0      0 [migration/1]
         0.0  0.0     0      0 [migration/0]
         0.0  0.0     0      0 [kthreadd]
         0.0  0.0     0      0 [kswapd0]
         0.0  0.0     0      0 [kstriped]
         0.0  0.0     0      0 [ksoftirqd/3]
         0.0  0.0     0      0 [ksoftirqd/2]
         0.0  0.0     0      0 [ksoftirqd/1]
         0.0  0.0     0      0 [ksoftirqd/0]
         0.0  0.0     0      0 [ksnapd]
         0.0  0.0     0      0 [kseriod]
         0.0  0.0     0      0 [kjournald]
         0.0  0.0     0      0 [khvcd]
         0.0  0.0     0      0 [khelper]
         0.0  0.0     0      0 [kblockd/3]
         0.0  0.0     0      0 [kblockd/2]
         0.0  0.0     0      0 [kblockd/1]
         0.0  0.0     0      0 [kblockd/0]
         0.0  0.0     0      0 [flush-202:1]
         0.0  0.0     0      0 [events/3]
         0.0  0.0     0      0 [events/2]
         0.0  0.0     0      0 [events/1]
         0.0  0.0     0      0 [events/0]
         0.0  0.0     0      0 [crypto/3]
         0.0  0.0     0      0 [crypto/2]
         0.0  0.0     0      0 [crypto/1]
         0.0  0.0     0      0 [crypto/0]
         0.0  0.0     0      0 [cpuset]
         0.0  0.0     0      0 [bdi-default]
         0.0  0.0     0      0 [async/mgr]
         0.0  0.0     0      0 [aio/3]
         0.0  0.0     0      0 [aio/2]
         0.0  0.0     0      0 [aio/1]
         0.0  0.0     0      0 [aio/0]

    Now, I know that ps is not the best for viewing process memory usage, but that's because it tends to report more memory than is actually being used... meaning no matter how you look at it, all my processes combined shouldn't be using near 122mb, even if you account for the disk cache. What's more, memory usage is growing all the time. I've had to restart my server once a week, because once my 256mb fills up it starts swapping, which it wouldn't do just for disk cache. Shouldn't there be some way for me to see the culprit?! I'm new to server admin, so please point out anything obvious I'm overlooking. Just for good measure, the output of cat /proc/meminfo:

        MemTotal:         251140 kB
        MemFree:          124604 kB
        Buffers:           20536 kB
        Cached:            66136 kB
        SwapCached:            0 kB
        Active:            65004 kB
        Inactive:          37576 kB
        Active(anon):      15932 kB
        Inactive(anon):      164 kB
        Active(file):      49072 kB
        Inactive(file):    37412 kB
        Unevictable:           0 kB
        Mlocked:               0 kB
        SwapTotal:        524284 kB
        SwapFree:         524284 kB
        Dirty:                 8 kB
        Writeback:             0 kB
        AnonPages:         15916 kB
        Mapped:            10668 kB
        Shmem:               188 kB
        Slab:              18604 kB
        SReclaimable:      10088 kB
        SUnreclaim:         8516 kB
        KernelStack:         536 kB
        PageTables:         1444 kB
        NFS_Unstable:          0 kB
        Bounce:                0 kB
        WritebackTmp:          0 kB
        CommitLimit:      649852 kB
        Committed_AS:      64224 kB
        VmallocTotal:   34359738367 kB
        VmallocUsed:         752 kB
        VmallocChunk:   34359737600 kB
        DirectMap4k:      262144 kB
        DirectMap2M:           0 kB

    EDIT: I had misinterpreted the meaning of free -m at first. But even so: the important thing is that my OS eventually begins to use swap memory if I don't restart my server, which disk caching wouldn't do. So where do I look to see what is using all this memory?
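
    Editorial sketch: to see what userland actually holds (as opposed to page cache and kernel slab), one can sum resident set sizes straight from /proc. Note that RSS double-counts shared pages, so this is an upper bound on application memory; whatever remains of "used" after buffers and cache is largely kernel memory (see Slab in the meminfo output above). A rough cut, with no assumptions beyond standard /proc layout:

        # Sum VmRSS across all processes; values in /proc/*/status are in kB.
        import os

        total_rss_kb = 0
        for pid in os.listdir("/proc"):
            if not pid.isdigit():
                continue  # skip non-process entries like /proc/meminfo
            try:
                for line in open("/proc/%s/status" % pid):
                    if line.startswith("VmRSS"):
                        total_rss_kb += int(line.split()[1])
            except IOError:
                pass  # the process exited while we were reading it

        print("userland RSS total: %d MB" % (total_rss_kb / 1024))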

    Read the article

  • How to understand the memory usage and load average in linux server

    - by Tim
    Hi, I am using a Linux server which has 128GB of memory and 24 cores. I use top to see how much is used; its output is pasted at the end of the post. Here are two questions:

    (1) I see that each of the running processes occupies a very small percentage of memory (%MEM no more than 0.2%, and most just 0.0%), so how is the total memory almost all used, as in the fourth line of output ("Mem: 130766620k total, 130161072k used, 605548k free, 919300k buffers")? The sum of the used percentage of memory over all processes seems unlikely to reach almost 100%, doesn't it?

    (2) How should I understand the load average on the first line ("load average: 14.04, 14.02, 14.00")?

    Thanks and regards!

    Edit: Thanks! I would also really like to hear some rough numbers based on the used percentage of memory to determine if a server is heavily loaded, since I once became the one who crammed the server without understanding the current load. Is swap regarded as almost the same as memory? For example, when memory and swap are almost the same size, if the memory is almost running out but the swap is still largely free, may I just view it as if the used percentage of memory + swap is still not high, and run other new processes? How would you consider CPU and memory (or memory + swap) usage together? Do you become worried if either of them reaches too high, or both?

    Output of top:

        $ top
        top - 12:45:33 up 19 days, 23:11, 18 users,  load average: 14.04, 14.02, 14.00
        Tasks: 484 total,  12 running, 472 sleeping,  0 stopped,  0 zombie
        Cpu(s): 36.7%us, 19.7%sy, 0.0%ni, 43.6%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem:  130766620k total, 130161072k used,   605548k free,   919300k buffers
        Swap:  63111312k total,   500556k used, 62610756k free, 124437752k cached

          PID USER     PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
         6529 sanchez  18  -2 1075m 219m  13m S  100  0.2 13760:23 MATLAB
        13210 timothy  18  -2 48336  37m 1216 R  100  0.0   3:56.75 absurdity
        13888 timothy  18  -2 48336  37m 1204 R  100  0.0   2:04.89 absurdity
        14542 timothy  18  -2 48336  37m 1196 R  100  0.0   1:08.34 absurdity
        14544 timothy  18  -2  2888 2076  400 R  100  0.0   1:06.14 gatherData
         6183 sanchez  18  -2 1133m 195m  13m S  100  0.2 13676:04 MATLAB
         6795 sanchez  18  -2 1079m 210m  13m S  100  0.2 13734:26 MATLAB
        10178 timothy  18  -2 48336  37m 1204 R  100  0.0  11:33.93 absurdity
        12438 timothy  18  -2 48336  37m 1216 R  100  0.0   5:38.17 absurdity
        13661 timothy  18  -2 48336  37m 1216 R  100  0.0   2:44.13 absurdity
        14098 timothy  18  -2 48336  37m 1204 R  100  0.0   1:58.31 absurdity
        14335 timothy  18  -2 48336  37m 1196 R  100  0.0   1:08.93 absurdity
        14765 timothy  18  -2 48336  37m 1196 R   99  0.0   0:32.57 absurdity
        13445 timothy  18  -2 48336  37m 1216 R   99  0.0   3:01.37 absurdity
        28990 root     20   0     0    0    0 S    2  0.0  65:50.21 pdflush
        12141 tim      18  -2 19380 1660 1024 R    1  0.0   0:04.04 top
         1240 root     15  -5     0    0    0 S    0  0.0  16:07.11 kjournald
         9019 root     20   0  296m 4460 2616 S    0  0.0  82:19.51 kdm_greet
            1 root     20   0  4028  728  592 S    0  0.0   0:03.11 init
            2 root     15  -5     0    0    0 S    0  0.0   0:00.00 kthreadd
            3 root     RT  -5     0    0    0 S    0  0.0   0:01.01 migration/0
            4 root     15  -5     0    0    0 S    0  0.0   0:08.13 ksoftirqd/0
            5 root     RT  -5     0    0    0 S    0  0.0   0:00.00 watchdog/0
            6 root     RT  -5     0    0    0 S    0  0.0  17:27.31 migration/1
            7 root     15  -5     0    0    0 S    0  0.0   0:01.21 ksoftirqd/1
            8 root     RT  -5     0    0    0 S    0  0.0   0:00.00 watchdog/1
            9 root     RT  -5     0    0    0 S    0  0.0  10:02.56 migration/2
           10 root     15  -5     0    0    0 S    0  0.0   0:00.34 ksoftirqd/2
           11 root     RT  -5     0    0    0 S    0  0.0   0:00.00 watchdog/2
           12 root     RT  -5     0    0    0 S    0  0.0   4:29.53 migration/3
           13 root     15  -5     0    0    0 S    0  0.0   0:00.34 ksoftirqd/3
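
    Editor's note: a worked version of the arithmetic, using the numbers quoted in the post. Subtracting buffers and cache from "used" shows the applications' real footprint, and the load average only means something relative to the core count:

        # Question (1): values in kB, taken from the top output above.
        total, used, buffers, cached = 130766620, 130161072, 919300, 124437752
        apps_kb = used - buffers - cached
        print("apps hold ~%.1f GB of %.1f GB; the rest of 'used' is page cache"
              % (apps_kb / 1024.0 ** 2, total / 1024.0 ** 2))  # ~4.6 of ~124.7 GB

        # Question (2): load average counts runnable (and uninterruptible) tasks,
        # so divide by the number of cores to gauge saturation.
        load, cores = 14.04, 24
        print("average CPU saturation: ~%.0f%%" % (100 * load / cores))  # ~59%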

    Read the article

  • Which Python Framework and CMS coming from PHP - Codeigniter+ExpresionEngine?

    - by Joshua Fricke
    We are currently developing most of our applications in PHP using CodeIgniter (CI) and ExpressionEngine (EE) and are looking to try our hand at Python. So we are looking for a framework, and ideally a CMS, that work well together like the CI+EE combo does. We have done a bit of research, and these look like some good suggestions (though we are not limiting ourselves to these):

    Frameworks - http://wiki.python.org/moin/WebFrameworks

        Django
        Web2py

    CMS - http://wiki.python.org/moin/ContentManagementSystems (the ones below picked because they are developed with a framework, my only frame of reference being CI+EE)

        Merengue
        Mezzanine
        Django CMS

    Input would be great in helping us decide.

    Read the article

  • Best/Easiest Technology for a RESTful webservice [closed]

    - by user1751547
    So I'm going to be creating a phone app + website that will need to utilize a web service. Web services are completely outside my domain, so I'm not entirely sure where to start. Does anybody have any suggestions on the technology stack I should use (mainly in terms of ease of use and reliability)? So far what I've looked at are:

        RoR
        Python + Django + TastyPie
        Python + Flask
        Microsoft WCF 3.5
        PHP + some framework

    I would rather not do anything with Java. I'm leaning towards the Python + Django + TastyPie route as it seems like it would be easy to get up and going and to learn in general. My only concern with it is the reliability of the libraries (feature-breaking updates, abandonment, etc.). Also, I would prefer to create the website with the same framework so I wouldn't have to deal with learning and using two different ones. Any advice would be helpful, thanks.
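
    Editorial sketch, for a sense of scale: a REST endpoint in any of the listed Python options is only a few lines. A minimal, hypothetical example with Flask (the resource name and in-memory data are placeholders, and this is not a recommendation over the TastyPie route):

        from flask import Flask, jsonify, request

        app = Flask(__name__)
        tasks = {1: {"title": "write the API"}}  # stand-in for a real datastore

        @app.route("/api/tasks/<int:task_id>", methods=["GET"])
        def get_task(task_id):
            return jsonify(tasks.get(task_id, {}))

        @app.route("/api/tasks", methods=["POST"])
        def create_task():
            task_id = max(tasks) + 1
            tasks[task_id] = {"title": request.json["title"]}
            return jsonify({"id": task_id}), 201

        if __name__ == "__main__":
            app.run()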

    Read the article

  • SQL Server Developer Tools – Codename Juneau vs. Red-Gate SQL Source Control

    - by Ajarn Mark Caldwell
    So how do the new SQL Server Developer Tools (previously code-named Juneau) stack up against SQL Source Control? Read on to find out.

    At the PASS Community Summit a couple of weeks ago, it was announced that the previously code-named Juneau software would be released under the name of SQL Server Developer Tools with the release of SQL Server 2012. This replacement for Database Projects in Visual Studio (also known in a former life as Data Dude) has some great new features. I won't attempt to describe them all here, but I will applaud Microsoft for making major improvements. One of my favorite changes is the way database elements are broken down. Previously every little thing was in its own file. For example, indexes were each in their own file. I always hated that. Now, SSDT uses a pattern similar to Red-Gate's and puts the indexes and keys into the same file as the overall table definition.

    Of course there are really cool features to keep your database model in sync with the actual source scripts, and the rename refactoring feature is now touted as being more than just a search and replace, but rather a "semantic-aware" search and replace. Funny, it reminds me of SQL Prompt's Smart Rename feature. But I'm not writing this just to criticize Microsoft and argue that they are late to the party with this feature set. Instead, I do see it as a viable alternative for folks who want all of their source code to be version controlled, but there are a couple of key trade-offs that you need to know about when you choose which tool set to use.

    First, the basics

    Both tool sets integrate with a wide variety of source control systems including the most popular: Subversion, GIT, Vault, and Team Foundation Server. Both tools have integrated functionality to produce objects to upgrade your target database when you are ready (DACPACs in SSDT, integration with SQL Compare for SQL Source Control). If you regularly live in Visual Studio or the Business Intelligence Development Studio (BIDS) then SSDT will likely be comfortable for you. Like BIDS, SSDT is a Visual Studio project type that comes with SQL Server, and if you don't already have Visual Studio installed, it will install the shell for you. If you already have Visual Studio 2010 installed, then it will just add this as an available project type. On the other hand, if you regularly live in SQL Server Management Studio (SSMS) then you will really enjoy the SQL Source Control integration from within SSMS. Both tool sets store their database model in script files. In SSDT, these are on your file system like other source files; in SQL Source Control, these are stored in the folder structure in your source control system, and you can always GET them to your file system if you want to browse them directly.

    For me, the key differentiating factors are 1) a single, unified check-in, and 2) migration scripts. How you value those two features will likely make your decision for you.

    Unified Check-In

    If you do a continuous-integration (CI) style of development that triggers an automated build with unit testing on every check-in of source code, and you use Visual Studio for the rest of your development, then you will want to really consider SSDT. Because it is just another project in Visual Studio, it can be added to your existing solution, and you can then do a complete, or unified, single check-in of all changes, whether they are application or database changes. This is simply not possible with SQL Source Control because it lives in a different development tool (SSMS instead of Visual Studio) and there is no way to do one unified check-in between the two. You CAN do really fast back-to-back check-ins, but there is the possibility that the automated build triggered from the first check-in will cause your unit tests to fail and the CI tool to report that you broke the build. Of course, the automated build triggered from the second check-in, which contains the "other half" of your changes, should pass, and so the amount of time that the build was broken may be very, very short; but if that is very, very important to you, then SQL Source Control just won't work; you'll have to use SSDT.

    Refactoring and Migrations

    If you work on a mature system, or on a not-so-mature but also not-so-well-designed system, where you want to refactor the database schema as you go along but you can't have data suddenly disappearing from your target system, then you'll probably want to go with SQL Source Control. As I wrote previously, there are a number of changes which you can make to your database that the comparison tools (both from Microsoft and Red Gate) simply cannot handle without the possibility (or probability) of data loss. Currently, SSDT only offers you the ability to inject PRE and POST custom deployment scripts. There is no way to insert your own script in the middle to override the default behavior of the tool. In version 3.0 of SQL Source Control (an Early Access version is now available) you have the ability to create your own custom migration script to take the place of the commands that the tool would have run, and ensure the preservation of your data. Or, even if the default tool behavior would have worked but you simply know a better way, you can take control and do things your way instead of theirs.

    You Decide

    In the environment I work in, our automated builds are not triggered off of check-ins but off of the clock (currently once per night), and so there is no point at which the automated build and unit tests will be triggered without having both sides of the development effort already checked in. Therefore having a unified check-in, while handy, is not critical for us. As for migration scripts, these are critically important to us. We do a lot of new development on systems that have already been in production for years, and it is not uncommon for us to need to do a refactoring of the database. Because of the maturity of the existing system, that often involves data migrations or other additional SQL tasks that the comparison tools just can't detect on their own. Therefore, the ability to create a custom migration script to override the tool's default behavior is very important to us. And so, you can see why we will continue to use Red Gate SQL Source Control for the foreseeable future.

    Read the article

  • What can I use to set up a 100% cloud based python IDE + Hosting environment?

    - by PhD
    I'm working on a side project and I can't always be on "my" machine to code/deploy the web application. I am aware of various cloud IDEs (e.g., Cloud9 IDE) and independent Django/Flask etc. hosting services (e.g., Heroku). What is the best way to completely shift my development/deployment environment to the cloud so that I can code/deploy from anywhere? I don't mind using paid services, but I'm not sure which cloud IDEs play nice with which hosting services. Has anyone tried this setup before? What has or hasn't worked? I want to minimize the manual intervention in 'connecting the two services' as much as possible. I'm going to be using Django, MySQL and Redis for the web app.

    Read the article

  • Fusion CRM ISV program is gaining weight: Examples of certified add-ons

    - by Richard Lefebvre
    The Fusion CRM ISV program is gaining traction. Please find below a few examples of partners having certified their add-ons to work seamlessly on top of Oracle Fusion CRM. For more information, please contact [email protected]

    · Opportunity-to-Quote. Big Machines now integrates seamlessly with Oracle Fusion CRM, enabling customers with complex products and services and multiple sales channels to streamline the entire opportunity-to-quote process, including product selection, configuration, pricing, quoting, and approval workflows. Create a custom hyperlink in the Opportunity to invoke the Big Machines CPQ application to create a quote and sync up with the Fusion CRM custom quote object using the CRUD operations. The quote can be updated using the custom button in the custom tab in the opportunity details. See: http://www.bigmachines.com/oracle.php

    · SaaS Billing and Subscription Management. Is your prospect/customer asking whether top billing partners support Fusion CRM? Positioning an integrated CRM solution for billing usage and subscription-based services? Need to implement a billable solution on the Oracle Java Cloud Service? Aria Systems and Zuora have recently engaged with Oracle to deepen their integrations with Fusion CRM and team with Oracle for joint opportunities.

    · Google Apps, SharePoint, Email-CRM Integrations
        o Do your prospects use Google Apps in their business operations? A "Best of AppExchange" award winner recently completed their integration for Fusion CRM. CirrusInsight plugs Fusion CRM web services directly into Gmail, allowing you to search an existing opportunity or contact, provide account information, and create an interaction such as a phone call, appointment, or email against a customer or contact in Fusion CRM directly from Gmail.
        o An EMEA / France based partner, Aryvart provides bi-directional synchronization of appointments and tasks between Google Calendar and Oracle Fusion CRM. For customers, it means adopting Oracle Fusion CRM while continuing to use Google Calendar for appointments.
        o Looking to lower the barrier and expand in SharePoint accounts? InFact Group (EMEA / France & Germany) provides a Microsoft SharePoint Connector for Oracle Fusion CRM. With this solution, you can store documents attached to an opportunity in a Microsoft SharePoint repository. For customers, it means adopting Oracle Fusion CRM while continuing to collaborate across existing content management infrastructure.
        o Need to connect to MacMail, GroupWise, or Outlook/Exchange? Omni Technology is a partner whose Riva CRM Integration recently engaged to support Fusion CRM as a key platform.

    · Migration tools from competitive CRMs to Oracle Fusion CRM; data migration tools from legacy CRMs to Oracle Fusion CRM. A partner with the tools and techniques to speed adoption, Conemis provides data integration tools to export data from a legacy CRM and import it into Oracle Fusion CRM via Web Services APIs. For customers, it means reducing the cost of data migration from a legacy CRM system into Oracle Fusion CRM.

    Read the article

  • Bad Spot to Be In: Playing Catch-up with Mobile Advertising

    - by Mike Stiles
    You probably noticed, there’s a mass migration going on from online desktop/laptop usage to smartphone/tablet usage.  It’s an indicator of how we live our lives in the modern world: always on the go, with no intention of being disconnected while out there. Consequently, paid as it relates to mobile advertising is taking the social spotlight. eMarketer estimated that in 2013, US adults would spend about 2 hours, 21 minutes a day on mobile, not counting talking time. More people in the world own smartphones than own toothbrushes (bad news I suppose if you’re marketing toothpaste). They’re using those mobile devices to access social networks, consuming at least 17% of their mobile time on them. Frankly, you don’t need a deep dive into mobile usage stats to know what’s going on. Just look around you in any store, venue or coffee shop. It’s really obvious…our mobile devices are now where we “are,” so that’s where marketers can increasingly reach us. And it’s a smart place for them to do just that. Mobile devices can be viewed more and more as shopping facilitators. Usually when someone is on mobile, they are not in passive research mode. They are likely standing near a store or in front of a product, using their mobile to seek reassurance that buying that product is the right move. They are the hottest of hot prospects. Consider that 4 out of 5 consumers use smartphones to shop, 52% of Americans use mobile devices for in-store for research, 70% of mobile searches lead to online action inside of an hour, and people that find you on mobile convert at almost 3x the rate as those that find you on desktop or laptop. But what are marketers doing? Enter statistics from Mary Meeker’s latest State of the Internet report. Common sense says you buy advertising where people are spending their eyeball time, right? But while mobile is 20% of media use and rising, the ad spend there is 4%. Conversely, while print usage is at 5% and falling, ad spend there is 19%. We all love nostalgia, but come on. There are reasons marketing dollar migration to mobile has not matched user migration, including the availability of mobile ad products and the ability to measure user response to mobile ads. But interesting things are happening now. First came Facebook’s mobile ad, which let app developers pay to get potential downloads. Then their mobile ad network was announced at F8, allowing marketers to target users across non-Facebook apps while leveraging the wealth of diverse data Facebook has on those users, a big deal since Nielsen has pointed out mobile apps make up 89% of the media time spent on mobile. Twitter has a similar play in motion with their MoPub acquisition. And now mobile deeplinks have arrived, which can take users straight to sub-pages of mobile apps for a faster, more direct shopper/researcher user experience. The sooner the gratification, the smoother and faster the conversion. To be clear, growth in mobile ad spending is well underway. After posting $13.1 billion in 2013, Gartner expects global mobile ad spending to reach $18 billion this year, then go to $41.9 billion by 2017. Cheap smartphones and data plans are spreading worldwide, further fueling the shift to mobile. Mobile usage in India alone should grow 400% by 2018. And, of course, there’s the famous statistic that mobile should overtake desktop Internet usage this year. How can we as marketers mess up this opportunity? Two ways. 
We could position ourselves in perpetual “catch-up” mode and keep spending ad dollars where the public used to be. And we could annoy mobile users with horrid old-school marketing practices. Two-thirds of users told Forrester they think interruptive in-app ads are more annoying than TV ads. Make sure your brand’s social marketing technology platform is delivering a crystal clear picture of your social connections so the mobile touch point is highly relevant, mobile optimized, and delivering real value and satisfying experiences. Otherwise, all we’ve done is find a new way to be unwanted. @mikestiles @oraclesocialPhoto: Kate Mallatratt, freeimages.com

    Read the article

  • Moving monarchs and dragons: migrating the JDK bugs to JIRA

    - by darcy
    Among insects, monarch butterflies and dragonflies have the longest migrations; migrating JDK bugs involves a long journey as well! As previously announced by Mark back in March, we've been working according to a revised plan to transition JDK bug management from Sun's legacy system to, initially, an Oracle-internal JIRA instance, which will afterward be made visible and usable externally. I've been busily working on this project for the last few months and the team has made good progress on many aspects of the effort.

    JDK bugs will be imported into JIRA regardless of age; bugs will also be imported regardless of state, including closed bugs. Consequently, the JDK bug project will start pre-populated with over 100,000 existing bugs, some dating all the way back to 1994. This will allow a continuity of information and allow new issues to be linked to old ones. Using a custom import process, the Sun bug numbers will be preserved in JIRA. For example, the Sun bug with bug number 4040458 will become "JDK-4040458" in JIRA. In JIRA the project name, "JDK" in our case, is part of the bug's identifier. Bugs created after the JIRA migration will be numbered starting at 8000000; bugs imported from the legacy system have numbers ranging between 1000000 and 79999999.

    We're working with the bugs.sun.com team to try to maintain continuity of the ability both to read JDK bug information and to file new incidents. At least for now, the overall architecture of bugs.sun.com will be the same as it is today: it will be a gateway bridging to an Oracle-internal system, but the internal system will change from the legacy database to JIRA. Generally we are aiming to preserve the visibility of bugs currently viewable on bugs.sun.com; however, bugs in areas not related to the JDK will not be visible after the transition to JIRA. New incoming incidents will be sent to a separate JIRA project for initial triage before possibly being moved into the JDK project.

    JDK bug management leans heavily on being able to track the state of bugs in multiple releases, especially to coordinate delivering synchronized security releases (known as CPUs, critical patch updates, in Oracle parlance). For a security release, it is common for half a dozen or more release trains to be affected (for example, JDK 5, JDK 6 update, OpenJDK 6, JDK 7 update, JDK 8, virtual releases for HotSpot express, etc.). We've determined we need to track at least the tuple of (release, responsible engineer/assignee for the release, status in the release) for the release trains a fix is going into. To do this in JIRA, we are creating a separate port/backport issue type along with a custom link type to allow the multiple-release information to be easily grouped and presented together.

    The Sun legacy system had a three-level classification scheme: product, category, and subcategory. Out of the box, JIRA only has a one-level classification, component. We've implemented a custom second-level classification, subcomponent. As part of the bug migration we've taken the opportunity to think about how bugs should be grouped under a two-level system, and the new system will be simpler and more regular. The main top-level components of the JDK product will include:

        core-libs
        client-libs
        deploy
        install
        security-libs
        other-libs
        tools
        hotspot

    For the libs areas, the primary name of the subcomponent will be the package of the API in question. In the core-libs component, there will be subcomponents like:

        java.lang
        java.lang.class_loading
        java.math
        java.util
        java.util:i18n

    In the tools component, subcomponents will primarily correspond to command names in $JDK/bin like jar, javac, and javap. The first several bulk imports of the JDK bugs into JIRA have gone well and we're continuing to refine the import to have greater fidelity to the current data, including by reconstructing information not brought over in a structured fashion during the previous large JDK bug system migration back in 2004. We don't currently have a firm timeline for when the new system will be usable externally, but as it becomes available, I'll share further information in follow-up blog posts.

    Read the article

  • Perm SSIS Developer Urgently Required

    - by blakmk
    Job Role

    To provide dedicated data services support to the company, by designing, creating, maintaining and enhancing database objects, ensuring data quality, consistency and integrity. Migrating data from various sources to a central SQL 2008 data warehouse will be the primary function.

        · Migration of data from bespoke legacy databases to the SQL 2008 data warehouse.
        · Understand key business requirements, liaising with various aspects of the company.
        · Create advanced transformations of data, with focus on data cleansing, redundant data and duplication.
        · Creating complex business rules regarding data services, migration, integrity and support (best practices).

    Experience

        · Minimum 3 years' SSIS experience, in a project or BI development role, and involvement in at least 3 full ETL project life cycles, using the following methodologies and tools:
            o Excellent knowledge of ETL concepts including data migration & integrity, focusing on SSIS.
            o Extensive experience with SQL 2005 products; SQL 2008 desirable.
            o Working knowledge of SSRS and its integration with other BI products.
            o Extensive knowledge of T-SQL, stored procedures, triggers (table/database), views and functions, in particular coding and querying.
            o Data cleansing and harmonisation.
            o Understanding and knowledge of indexes, statistics and table structure.
            o SQL Agent – scheduling jobs, optimisation, multiple jobs, DTS.
            o Troubleshoot, diagnose and tune database and physical server performance.
            o Knowledge and understanding of locking, blocks, table and index design and SQL configuration.
        · Demonstrable ability to understand and analyse business processes.
        · Experience in creating business rules on best practices for data services.
        · Experience in working with, supporting and troubleshooting MS SQL servers running enterprise applications.
        · Proven ability to work well within a team and liaise with other technical support staff such as networking administrators, system administrators and support engineers.
        · Ability to create formal documentation, work procedures, and service level agreements.
        · Ability to communicate technical issues at all levels, including to a non-technical audience.
        · Good working knowledge of MS Word, Excel, PowerPoint, Visio and Project.

    Location

    Based in Crawley, with the possibility of some remote working.

    Contact me for more info: http://sqlblogcasts.com/blogs/blakmk/contact.aspx

    Read the article

  • Apache2 config problem

    - by Hellnar
    To use my Debian VPS for multiple domains, I did the following: removed the default site from sites-enabled/ and sites-available/ (the config and the symbolic link), and added this under sites-available/www.mysite.com:

        <VirtualHost MYIP:80>
            ServerName mysite.com
            ServerAlias www.mysite.com
            Alias /media/ /home/myuser/mysite/media/
            Alias /admin_media/ /home/myuser/django/Django-1.2/django/contrib/admin/media/
            WSGIScriptAlias / /home/myuser/mysite/wsgi.py
            ErrorLog /home/myuser/mysite/logs/error.log
            CustomLog /home/myuser/mysite/logs/access.log combined
        </VirtualHost>

    And I changed my ports.conf to:

        NameVirtualHost MYIP:80
        Listen 80

        <IfModule mod_ssl.c>
            # SSL name based virtual hosts are not yet supported, therefore no
            # NameVirtualHost statement here
            Listen 443
        </IfModule>

    Lastly, I enabled the new domain via the command a2ensite www.mysite.com. After restart I get this error:

        myuser:~# /etc/init.d/apache2 restart
        Restarting web server: apache2apache2: Syntax error on line 281 of /etc/apache2/apache2.conf:
        Syntax error on line 1 of /etc/apache2/sites-enabled/www.birertek.com:
        /etc/apache2/sites-enabled/www.birertek.com:1: <VirtualHost> was not closed.
        failed!

    Please help this poor soul.

    Read the article

  • Automatically Kill/Restart Process(es) When Memory is Critically Low

    - by nemesisfixx
    I have a Debian Wheezy VPS box where I am running a couple of Django apps in production. Ideally I would have addressed my current memory footprint issues by optimizing the apps, adding more RAM, or augmenting with swap. But the problem is that I doubt there's much memory optimization I'd milk from the Django apps (the stack being open source and robust), adding RAM is a cost constraint for me (this is a remote VPS), and the host doesn't offer options to use swap! So, in the meantime (as I wait to secure more resources to afford more RAM), I wish to mitigate the scenarios where the server runs out of memory, so that I don't just have to request a VPS restart (at that point, I can't even SSH into the box!). What I would love in a solution is the ability to detect when a process (or, generally, total system memory usage) exceeds a critical threshold (for now, say, when free RAM falls below 10%), which I've noticed occurs after the VPS has been up for long and when traffic suddenly surges on some of the heavy apps (most are just staging apps anyway). Then I could kill/restart the offending process(es), most likely Apache. Doing this manually in such situations has restored sane memory usage levels, a hint that possibly one or more of the Django apps has a memory leak. In brief:

        • Monitor overall system RAM usage.
        • When free RAM falls below a given critical threshold (say below 10%), kill/restart the offending process(es); or, more simply, if we assume from my log analysis (using linux-dash) that Apache is often the offender, just kill/restart Apache.
        • Rinse and repeat (see the sketch below).
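
    Editorial sketch of the watchdog described above, as a rough cut; the 10% threshold, poll interval and init script path are assumptions, and counting Buffers/Cached as reclaimable mirrors what free reports:

        # Poll /proc/meminfo; restart Apache when reclaimable-free RAM dips
        # below 10% of the total. Run under cron/supervisor, as root.
        import subprocess
        import time

        THRESHOLD = 0.10  # fraction of MemTotal considered "critically low"

        def meminfo():
            info = {}
            for line in open("/proc/meminfo"):
                key, rest = line.split(":", 1)
                info[key] = int(rest.split()[0])  # values are in kB
            return info

        while True:
            m = meminfo()
            available = m["MemFree"] + m["Buffers"] + m["Cached"]
            if available < THRESHOLD * m["MemTotal"]:
                subprocess.call(["/etc/init.d/apache2", "restart"])
                time.sleep(300)  # let the box settle before re-checking
            time.sleep(30)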

    Read the article

  • 404 not found error for virtual host

    - by qubit
    Hello, in my /etc/apache2/sites-enabled I have a file site2.com.conf, which defines a virtual host as follows:

        <VirtualHost *:80>
            ServerAdmin hostmaster@wharfage
            ServerName site2.com
            ServerAlias www.site2.com site2.com
            DirectoryIndex index.html index.htm index.php
            DocumentRoot /var/www
            LogLevel debug
            ErrorLog /var/log/apache2/site2_error.log
            CustomLog /var/log/apache2/site2_access.log combined
            ServerSignature Off
            <Location />
                Options -Indexes
            </Location>
            Alias /favicon.ico /srv/site2/static/favicon.ico
            Alias /static /srv/site2/static
            # Alias /media /usr/local/lib/python2.5/site-packages/django/contrib/admin/media
            Alias /admin/media /var/lib/python-support/python2.5/django/contrib/admin/media
            WSGIScriptAlias / /srv/site2/wsgi/django.wsgi
            WSGIDaemonProcess site2 user=samj group=samj processes=1 threads=10
            WSGIProcessGroup site2
        </VirtualHost>

    I do the following to enable the site:

        1) In /etc/apache2/sites-enabled, I run the command a2ensite site2.com.conf.
        2) I get a message that the site was successfully enabled, and then I run the command /etc/init.d/apache2 reload.

    But if I navigate to www.site2.com, I get 404 Not Found. I do have an index.html in /var/www (permissions: 777, ownership www-data:www-data), and I have also verified that a symlink was created for site2.com.conf in /etc/apache2/sites-enabled. Any way to fix this? Thank you.

    Read the article
